Produce the highest quality screenshots with the least amount of effort! Use Window Clippings.
Update: Window Clippings 2.0 is now available! Download it now from the Window Clippings website.
Previously I introduced the concept of add-ins for Window Clippings from the user’s perspective. Today I want to talk about what it takes to develop an add-in.
Arguably the concept of an add-in has been around for a very long time. In modern computing, applications can be considered add-ins to the operating system. Many well-established applications like Internet Explorer, Firefox, Microsoft Office, etc. all support add-ins of some kind. As far as platforms go, COM provides a great foundation for developing add-ins and more recently the upcoming release of the .NET Framework is finally introducing plumbing to simplify extensibility for managed applications.
There are many considerations when designing extensibility into a product. Naturally the nature and design of a given product plays a part in shaping what can be done with add-ins and how they might be exposed or integrated. Window Clippings is a native Windows application that does not rely on the .NET Framework so naturally supporting add-ins developed with native code is important. Equally obvious is support for add-ins developed using managed code since many users will want to be able to whip up a quick add-in using their favorite .NET-supporting compiler.
Based on these scenarios and constraints I devised an extensibility model that allows add-ins to be developed in either native C++ using COM or using purely managed code using your favorite .NET compiler. Both native and managed add-ins are first-class add-ins and both receive the same treatment from Window Clippings as far as integration and feature support goes. Window Clippings also starts as a native application but will load the CLR on demand in the event that any managed add-ins are being used. You can of course continue to use Window Clippings on platforms that may not have the .NET Framework installed such as Windows XP or Windows Server Core.
I will start by discussing native add-ins but if you’re only interested in managed add-ins then feel free to skip ahead as managed add-ins are quite a lot simpler since the WindowClippings.dll managed assembly takes care of all of the plumbing.
Writing add-ins using native C++
Add-ins are packaged as COM servers. You can include as many add-ins as you wish in a server. Typically this involves creating a DLL project in Visual C++, exporting DllGetClassObject and friends, and implementing one COM class for each add-in. You can of course use whatever language, compiler and packaging model as long as you fulfill the responsibilities of a COM server.
The WindowClippings.h header file defines the IAddIn interface as well as the three interfaces that derive from it and represent the three types of add-ins supported by Window Clippings namely Filter, Save As and Send To add-ins. IAddIn is defined as follows:
struct DECLSPEC_UUID("...") DECLSPEC_NOVTABLE IAddIn : IUnknown
{
    virtual HRESULT STDMETHODCALLTYPE get_Location(__out BSTR* location) = 0;
    virtual HRESULT STDMETHODCALLTYPE get_Name(__out BSTR* name) = 0;
    virtual HRESULT STDMETHODCALLTYPE get_HasSettings(__out BOOL* hasSettings) = 0;
    virtual HRESULT STDMETHODCALLTYPE LoadSettings(IStream* source) = 0;
    virtual HRESULT STDMETHODCALLTYPE SaveSettings(IStream* destination) = 0;
    virtual HRESULT STDMETHODCALLTYPE EditSettings(HWND parent) = 0;
};
get_Location must be implemented and provides the fully-qualified path for the file that contains the add-in. This is used by Window Clippings to allow the user to easily unregister a given add-in. You could use the GetModuleFileName function to implement this method.
get_Name must be implemented and provides the display name for the add-in.
get_HasSettings must be implemented and indicates whether the add-in has configurable settings. If get_HasSettings returns false through its hasSettings parameter then the next three methods will not be called and need not be implemented.
LoadSettings is called by Window Clippings prior to using the add-in and allows the add-in to load any configuration settings that were previously saved. This method should return E_NOTIMPL if get_HasSettings returns false.
SaveSettings is called by Window Clippings after the EditSettings method is called and the user chose to apply his or her changes. This method should return E_NOTIMPL if get_HasSettings returns false.
EditSettings is called by Window Clippings when the user chooses to edit the add-in’s settings. The method must display a modal dialog box with configuration settings. It should return E_NOTIMPL if get_HasSettings returns false.
The following class template may be used to simplify developing add-ins that are not configurable:
template <typename T>
class NoSettingsAddIn : public T
{
private:
    STDMETHODIMP get_HasSettings(__out BOOL* hasSettings)
    {
        HR_(E_POINTER, 0 != hasSettings);
        *hasSettings = false;
        return S_OK;
    }
    STDMETHODIMP LoadSettings(IStream* /*source*/)
    {
        return E_NOTIMPL;
    }
    STDMETHODIMP SaveSettings(IStream* /*destination*/)
    {
        return E_NOTIMPL;
    }
    STDMETHODIMP EditSettings(HWND /*parent*/)
    {
        return E_NOTIMPL;
    }
};
Add-ins must also be registered to implement the WindowClippingsCategory category (also defined in WindowClippings.h).
Filter add-ins implement the IFilter interface which is defined as follows:
struct DECLSPEC_UUID("...") DECLSPEC_NOVTABLE IFilter : IAddIn
{
    virtual HRESULT STDMETHODCALLTYPE Process(Gdiplus::BitmapData* bitmapData) = 0;
};
So in addition to implementing IAddIn, you need to implement the Process method. The single BitmapData parameter is borrowed from GDI+ and provides the attributes of the image that Window Clippings has captured. The Process method may manipulate the bitmap directly before returning.
Save As add-ins implement the ISaveAs interface which is defined as follows:
struct DECLSPEC_UUID("...") DECLSPEC_NOVTABLE ISaveAs : IAddIn
{
    virtual HRESULT STDMETHODCALLTYPE get_Extension(__out BSTR* extension) = 0;
    virtual HRESULT STDMETHODCALLTYPE Save(Gdiplus::BitmapData* bitmapData,
                                           COLORREF backColor,
                                           IStream* destination) = 0;
};
Save As add-ins are called by the built-in “Save to disk” add-in to save the image in the user’s chosen format.
get_Extension provides the file extension for the particular format that the add-in provides. This is called by Window Clippings when automatically generating a file name. It is also used to populate the filter combo box, along with the get_Name method, in the Save As dialog box if the user prefers to provide a file name directly.
The Save method is where you do the actual work of formatting the bitmap in your particular format and write the results to the stream. The BitmapData parameter provides a working copy of the image that Window Clippings has captured. A copy is provided so that you can freely manipulate it prior to formatting and saving the image. This can be useful depending on your image format’s capabilities. For example, the bitmap has an alpha channel but some formats don’t support that and you may need to “flatten” the image first. The COLORREF parameter indicates the user’s chosen background color. This should be used if you need to remove the alpha channel and replace it with the background color. Finally, the IStream is where you save the image.
Send To add-ins implement the ISendTo interface which is defined as follows:
struct DECLSPEC_UUID("...") DECLSPEC_NOVTABLE ISendTo : IAddIn
{
    virtual HRESULT STDMETHODCALLTYPE Send(Gdiplus::BitmapData* bitmapData,
                                           BSTR title,
                                           COLORREF backColor,
                                           ISaveAs* saveAs) = 0;
};
The Send method is called by Window Clippings while executing the user’s action sequence. The BitmapData parameter provides a working copy of the image captured by Window Clippings. The image may have already been altered by any previous Filter add-ins. The BSTR parameter provides the title of the selected window, if any. This may be used if the destination of the Send To add-in can use or display it somehow. The COLORREF parameter indicates the user’s chosen background color. This should be used if the destination of the Send To add-in does not support an alpha channel. Finally, the ISaveAs parameter provides the Save As add-in that should be used in the event that the Send To add-in saves the image to a file. This might be useful if you’re writing an add-in to send to an FTP server for example.
The Window Clippings website will include developer information and complete working samples for each type of add-in when version 2.0 is released.
Writing add-ins using managed code
Welcome back Daniel. :)
Window Clippings 2.0 includes the WindowClippings.dll assembly that makes developing add-ins a breeze. Here’s what it looks like in .NET Reflector:
As you can see, it only has four public types. A bunch of internal types take care of all the hard work of making COM interop seamless, taking care of laying out the managed interfaces so that the vtables line up just right and the add-ins register themselves correctly. Not only that, but WindowClippings.dll is 100% MSIL without a hint of native code, making it platform agnostic. This means that any managed add-ins you develop will work with both 32-bit and 64-bit versions of Window Clippings (assuming your add-in assembly is not platform specific).
AddInRegistrar is for internal use only and should not be used. It is used by the (native) Window Clippings application to register managed add-ins.
Filter is an abstract class that you must derive from to implement a Filter add-in. Here’s an example of a filter add-in:
[Guid("...")]
public class FilterSample : Filter
{
    public FilterSample()
    {
        Name = "Filter sample";
    }

    protected override void Process(Bitmap bitmap)
    {
        bitmap.SetPixel(0, 0, Color.Red);
    }
}
Remember that every add-in class needs a unique GUID that you set using the Guid attribute. In this example the constructor sets the display name of the add-in and the abstract Process method is implemented with a simple modification of the bitmap. Now let’s make it configurable. First you need to set HasSettings to true in the constructor. You then need to override the protected virtual LoadSettings, SaveSettings and EditSettings methods. Here’s the updated example:
[Guid("...")]
public class FilterSample : Filter
{
    public FilterSample()
    {
        HasSettings = true;
        m_color = Color.Red; // default

        UpdateName();
    }

    protected override void Process(Bitmap bitmap)
    {
        bitmap.SetPixel(0, 0, m_color);
    }

    protected override void LoadSettings(BinaryReader reader)
    {
        m_color = Color.FromArgb(reader.ReadInt32());
        UpdateName();
    }

    protected override void SaveSettings(BinaryWriter writer)
    {
        writer.Write(m_color.ToArgb());
    }

    protected override void EditSettings(IWin32Window parentWindow)
    {
        using (ColorDialog dialog = new ColorDialog())
        {
            dialog.FullOpen = true;
            dialog.Color = m_color;

            if (DialogResult.OK == dialog.ShowDialog(parentWindow))
            {
                m_color = dialog.Color;
                UpdateName();
            }
        }
    }

    private void UpdateName()
    {
        Name = string.Format("Filter sample ({0})", m_color);
    }

    private Color m_color;
}
And here’s what it looks like when you click the Settings button for this add-in:
If you look closely you’ll see a blue pixel in the top-left corner of the image thanks to this add-in.
Notice that I update the add-in’s Name property whenever the color is updated. Window Clippings will query the Name property after the user changes the add-in’s settings so this allows you to customize the name based on how the add-in is configured.
The Name and HasSettings properties as well as the LoadSettings, SaveSettings and EditSettings methods work identically for the other types of add-ins so I’m not going to discuss them again. Rest assured that Save As and Send To add-ins can provide configurable settings in just the same way.
SaveAs is an abstract class that you must derive from to implement a Save As add-in. Here’s an example of such an add-in:
[Guid("...")]
public class SaveAsGif : SaveAs
{
    public SaveAsGif()
    {
        Name = "GIF image";
        Extension = "gif";
    }

    public override void Save(Bitmap bitmap, Color backColor, Stream destination)
    {
        bitmap.Save(destination, ImageFormat.Gif);
    }
}
You must set the Extension property to the file extension of the particular format your add-in provides. You must also override the Save method to take care of saving the bitmap to the destination stream. This example simply uses the Bitmap’s Save method to save the image using the GIF image format. Keep in mind that Bitmap will always be a 32-bit image including an alpha channel. Given an image format, such as GIF (GIF uses a color mask for transparency), which cannot handle the alpha channel, you need to remove any transparency from the image during or prior to formatting it. That’s where the Color parameter comes in. This is the background color chosen by the user and needs to be blended into the image to remove the alpha channel if your image format does not support it. Here’s a simple example:
public override void Save(Bitmap bitmap, Color backColor, Stream destination)
{
    FlattenBitmap(bitmap, backColor);
    bitmap.Save(destination, ImageFormat.Gif);
}

private void FlattenBitmap(Bitmap bitmap, Color backColor)
{
    for (int x = 0; x < bitmap.Width; ++x)
    {
        for (int y = 0; y < bitmap.Height; ++y)
        {
            Color color = bitmap.GetPixel(x, y);

            color = Color.FromArgb(
                color.R * color.A / 255 + backColor.R * (255 - color.A) / 255,
                color.G * color.A / 255 + backColor.G * (255 - color.A) / 255,
                color.B * color.A / 255 + backColor.B * (255 - color.A) / 255);

            bitmap.SetPixel(x, y, color);
        }
    }
}
SendTo is an abstract class that you must derive from to implement a Send To add-in. Here’s an example of such an add-in:
[Guid("...")]
public class SendToSample : SendTo
{
    public SendToSample()
    {
        Name = "Send to sample";
    }

    protected override void Send(Bitmap bitmap, string title, Color backColor, SaveAs saveAs)
    {
        // TODO: send image to destination
    }
}
You must override the Send method to take care of sending the image to its destination. This destination can be anything you can imagine such as a web service, database or application. The string parameter provides the original title of the window. The Color parameter provides the background color chosen by the user in the event that you need to flatten the image. The SaveAs parameter provides the add-in chosen by the user to handle image formatting in the event that your destination can handle any format.
That’s it for today. I hope this article provided you with a good idea of how to develop add-ins for Window Clippings.
Stay tuned for more highlights from the upcoming Window Clippings 2.0!
© 2007 Kenny Kerr
Do you need to register the native plugins..err add-in in HKCU/HKLM like a normal COM dll or are you rolling your own COM like handling?
Will .net add-ins run into the same problem as explorer shell extensions written in .net (only one version of the .net runtime in a process)?
ac: good questions.
Native add-ins need to provide the regular DllRegisterServer and DllUnregisterServer exports but Window Clippings will call them directly when the user clicks the “Register Add-In” and “Unregister Add-In” buttons so there’s no need for you to call regsvr32 or the like.
The same issues apply to managed add-ins as with shell extensions. I’m considering pre-loading a specific version of the CLR so that it is more deterministic. That way a user can configure WC to use .NET 3.5 and load both .NET 2.0 and 3.5 add-ins predictably.
How do you load managed assemblies? I understand that your application is a native app and not managed.
gyurisc: Window Clippings takes care of registering the assembly for COM interop. So when it comes to loading a managed add-in, Window Clippings doesn’t know the difference and doesn’t care.
Tkinter tinkering (Graphical User Interfaces)
July 18, 2011 · 12 Comments
In order to use a GUI, we need to use an external module to do all the grunt work behind making the interface components and presenting them to the end user. There are a number of different modules which can be used. We are going to use one called “Tkinter”. We are going to use Tkinter because it should come as part of every Python installation, so there’s no additional downloading to do, nor any need to get the Responsible Adult involved – but leave a comment if you have a problem with Tkinter.
To start using Tkinter is pretty easy. You do this:
>>> from Tkinter import *
and… absolutely nothing should happen!
DANGER WILL ROBINSON!
A word of warning here: as a general rule using “from X import *” is considered really bad form in Python because it means every object (* is a “wildcard” and means everything) is imported by its own name into your program, rather than as part of the X namespace. So, in our previous examples we’ve used import random, then accessed the randint() function via the random namespace: random.randint(). Had we used from random import *, then we could have said randint() and not random.randint(). However, this would be a bad thing to do. If you have two packages X and Y, each of which has its own object called x, then, if you use “import *”, the two objects X.x and Y.x will both end up in your program called ‘x’. As you don’t have the X and Y namespaces to distinguish them, they ‘collide’.
So, why am I using “import *” for you? Because Tkinter is an exception. The Tkinter package has been designed so that as few objects as possible are imported into your program, and because their names are specifically GUI related, so there is less risk of a naming collision. If you are feeling uneasy about all this do multiple imports of the specific Tkinter components you need.
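As a quick illustration of the collision (X and Y here are hypothetical modules, just like in the paragraph above):

>>> from X import *        # X defines a function called x()
>>> from Y import *        # Y also defines x() -- it silently replaces X's
>>> x()                    # this now calls Y's x; X's version is lost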
Hello World in the GUI
Graphical User Interfaces use a specific, common set of graphical components to display information to a user and to get feedback from them. These components are called “widgets”. As you are using a graphical interface all of the time, you are actually already aware of widgets, but you just don’t realise that they’re there. So, for example, whenever you are presented with an “Ok/Cancel” dialog on the computer, the text of the dialog (“About to delete all your files”) is presented in a “Label” widget, while the “Ok” and “Cancel” are “Button” widgets.
So let’s do a label:
>>> labelWidget = Label(None,text="Hello Python4Kids!") # note: None has a capital
When you hit return you should see something like this (note: if you are running python-idle, this won’t work – at least not yet, run Python from a command line shell. If you don’t know what python-idle is, you can ignore this note):
Python has told Tkinter it’s going to need a window to put a Label widget in. Tkinter has asked the operating system to give it a window and the operating system has given it a default window. Along the top of the window are a number of widgets which have been provided by your operating system (not Python). Your window may look a little different if you’re running an alternative operating system. On my system above, there are 6 widgets – from left to right, a menu widget (X), a button (to pin this window across multiple desktops), a label ‘tk’, and three more buttons (minimise, maximise and close). You might have a different set of operating system widgets.
Where is the label we defined? Well, it exists:
>>> labelWidget
<Tkinter.Label instance at 0x7fcf9966e368>
However, it’s not visible yet. It’s not visible yet because we haven’t defined a “geometry” for it. We do this through a function which is part of the widget called .pack() (.pack() is called a ‘method’ of the widget, but we haven’t covered methods yet).
Debugging tip: If you can’t see your widget, make sure you’ve .pack()ed it.
So let’s .pack() it:
>>> labelWidget.pack()
The .pack() method can take a heap of arguments, which define how the widget is to be placed within the window. As we only have a single widget, we just use the default packing, and therefore put no arguments in. You should see the tk window above change to look something like this:
One of the widgets in the title bar is obscured here (only four are visible), but grabbing the bottom right hand corner of the window will resize it allowing you to see the lost widgets:
To close the window click the close button in the top right corner.
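By the way, if you save these lines into a file and run it as a script (rather than typing them at the >>> prompt), you also need to start Tkinter’s event loop yourself, or the program will finish before you get a chance to see the window. A minimal sketch, assuming Python 2 where the module is named Tkinter:

from Tkinter import *

labelWidget = Label(None, text="Hello Python4Kids!")
labelWidget.pack()         # remember: no .pack(), no visible widget
labelWidget.mainloop()     # hand control to Tkinter's event loop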
That’s it for now, more on GUIs in the coming tutes.
PS:
Did you notice that we did a basic GUI interface in only three lines of code (including the import statement)??? Is that amazing?
PPS:
Hello to visitors from the Podnutz webcast.
There was a problem with tkinler (or whatever). I know it wasnt in my typing [mainlly because i cut and pasted like and first world american :)] it said in line 100802 or something that the pack in the third line for the first button had bee terminated? i have no earthly idea. anyway awesome/ helpful site
There was a problem with tkinler (or whatever). I know it wasnt in my typing [mainly because i cut and pasted like any first world american :)] it said in line 100802 or something that the pack in the third line for the first button had bee terminated? i have no earthly idea. anyway awesome/ helpful site (sorry for two comments… i forgot to make my spelling legebal)
It is very difficult to diagnose the problem from your comments.
You can try:
saving this to a file
(if on Windows) giving the file extension as .pyw
My python installation didn’t have it (in Ubuntu), I had to install python-tk.
Why you posting that here, complain the mighty Ubuntu team.
Seriously though, would advise users to look into wxPython. Only way to fly.
Pingback: Minecraft Config Editor: Tkinter Text Widget and Frame « Python Tutorials for Kids 8+
what OS do you use?
Introduction to Python Magic Method
Magic methods are a collection of predefined methods from the Python library that cannot be declared or called directly. Instead, these methods are invoked internally by executing some other related operation in the code. Methods of this type are simple to use and implement, as they do not require any extra manual effort from the programmer. Hence the name ‘Magic Method’.
What is Python Magic Method?
- Python is an interpreted, object-oriented programming language which gives you the ability to write procedural and/or object-oriented code. As we know, creating objects simplifies the handling of complicated data structures. In addition to that, magic methods ease object-oriented programming.
- Before diving into what a magic method is, let’s understand why they were created in the first place.
- Below is one example of a class using a magic method and another without it. In the first, the __init__ magic method is used, which can initialize more than one instance variable in one go. A class Sports is defined taking two instance variables into account, that is, name and sport.
- Both instance variables can be defined in one go using the __init__ magic method. In case 2 the same thing is repeated, but this time we are using a set method to initialize each instance variable. Here, for 2 variables, we have to call such a method twice.
Here we can see the magic of the magic method: in one go we can define more than one instance variable.
Code:
class Sports():
def __init__(self,name,sport):
self.name = name
self.sport= sport
def get_name(self):
return self.name
def get_sport(self):
return self.sport
first = Sports('john','Game of Thrones')
print(first.get_name())
print(first.get_sport())
Output:
Code:
class Sports():
def set_name(self,name):
self.name = name
def set_sport(self,sport):
self.sport= sport
def get_name(self):
return self.name
def get_sport(self):
return self.sport
second = Sports()
second.set_name('Messi')
second.set_sport('Soccer')
print(second.get_name())
print(second.get_sport())
Output:
So basically, magic methods are something which can ease object-oriented programming.
Now let’s understand what they are.
- Magic methods are everything for object-oriented Python.
- Python magic methods can be defined as the special methods which can add “magic” to a class.
- These magic methods start and end with double underscores, for example, __init__ or __add__.
Python Magic Methods
Python has many built-in magic methods; to name some:
- __init__
- __new__
- __del__
- __abs__
- __add__
- __sub__
- __mul__
We will discuss some of the magic methods to understand it better.
Now let’s take the __add__ magic method:
A=5
A+3
Output: 8
The same can be performed with the __add__ magic method.
A.__add__(5)
Output: 10
Here the operator plus is used for adding a numerical value to numerical variable A. The same can be performed using the built-in __add__ magic method. However, as we have discussed, magic methods are not supposed to be called directly, but internally, through some other methods or actions.
Components
To be specific, we can segregate magic methods into different categories instead of describing components.
1. Object Constructor and Initialiser
- Magic methods are used widely in Python, particularly in class construction and initialization.
- As we have discussed, the __init__ magic method is used to define the initialization of an object of the class.
- __init__ is not the first method to be invoked for object creation, however; the __new__ magic method is invoked first, which creates a new instance and then calls the __init__ magic method.
Let’s see an example of the same:
class AbstractClass(object):
def __new__(cls, a, b):
print("calling magic method__new__")
instance = super(AbstractClass, cls).__new__(cls)
instance.__init__(a, b)
def __init__(self, a, b):
print('calling magic method__init__')
print ("Initializing Instance", a, b)
a=AbstractClass(2,3)
Output:
calling magic method__new__
calling magic method__init__
Initializing Instance 2 3
__new__ can be used in a wide variety of ways, but this shows a simple example of this magic method.
2. Comparison Magic methods
Python has a number of magic methods that are designed to do intuitive comparisons between objects with the use of operators.
Some are listed below, with a short example after the list:
- __lt__(self, other): invoked on comparison using the < operator.
- __le__(self, other): invoked on comparison using the <= operator.
- __eq__(self, other): invoked on comparison using the == operator.
- __ne__(self, other): invoked on comparison using the != operator.
- __ge__(self, other): invoked on comparison using the >= operator.
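For instance, here is a small sketch (the Money class is made up purely for illustration) showing how defining __eq__ and __lt__ lets == and < work on your own objects:

class Money:
    def __init__(self, amount):
        self.amount = amount

    def __eq__(self, other):      # invoked by ==
        return self.amount == other.amount

    def __lt__(self, other):      # invoked by <
        return self.amount < other.amount

print(Money(5) == Money(5))   # True
print(Money(3) < Money(7))    # True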
3. Infix Operators
Python has typical built-in binary operators as magic methods.
Some are listed below, with a short example after the list:
- __add__(self, other): for addition
- __sub__(self, other): for subtraction
- __mul__(self, other): for multiplication
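As a sketch (the Vector class is invented for illustration), implementing these lets +, - and * work directly on instances:

class Vector:
    def __init__(self, x, y):
        self.x, self.y = x, y

    def __add__(self, other):     # invoked by +
        return Vector(self.x + other.x, self.y + other.y)

    def __sub__(self, other):     # invoked by -
        return Vector(self.x - other.x, self.y - other.y)

    def __mul__(self, k):         # invoked by *; scalar multiplication here
        return Vector(self.x * k, self.y * k)

v = Vector(1, 2) + Vector(3, 4)   # internally calls Vector.__add__
print(v.x, v.y)                   # 4 6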
Advantages of Python Magic Method
Python provides “magic methods” because they really perform magic for your program.
The biggest advantages are:
It provides a simple way to make objects behave like built-in types, which means one can avoid counter-intuitive or nonstandard ways of performing basic operations. For example,
we have two dictionaries ‘dicta’ and ‘dictb’:

dicta = {1 : "XYZ"}
dictb = {2 : "LMN"}
dicta + dictb
Output:
Traceback (most recent call last):
File “python”, line 1, in <module>
TypeError: unsupported operand type(s) for +: ‘dict’ and ‘dict’
Now, this ends up with a type error because the dictionary type doesn’t support addition. Now, we can extend the dictionary class and add “__add__” magic method:
class AddDict(dict):
def __add__(self, dicts):
self.update(dicts)
return AddDict(self)
dicta = AddDict({1 : "XYZ"})
dictb = AddDict({2 : "LMN"})
print (dicta + dictb)
Now, we are getting the desired output.
{1: ‘XYZ’, 2: ‘LMN’}
Thus, suddenly magic happened just by adding a magic method; the error we were getting earlier vanished.
Conclusion
Magic methods are special methods that are invoked indirectly and are identified with dunders, or double underscores, as in __add__. To make better use of Python classes, one must know at least some magic methods, such as __init__ and __new__. Comparison operator magic methods give Python an edge, since instances of objects can be compared for equality as well as inequality. In a nutshell, magic methods do magic to Python programming by reducing complexity.
I have one thread that is needed to be used multiple times. I dont want to create multiple instnces of this thread but I would like it to be used multiple time by another class.
Here's my code:
public class Restaurant {
    public static void main(String args[]) throws InterruptedException {
        int i = 0;
        final int NUMTHREADS = 3; // number of threads
        Customer thr[] = new Customer[NUMTHREADS];
        Thread myThread[] = new Thread[NUMTHREADS];
        Waiter r_thr = new Waiter();
        Thread rThread = new Thread(r_thr);
        rThread.start();
        // create threads
        for (i = 0; i < NUMTHREADS; ++i) {
            thr[i] = new Customer(i);
            myThread[i] = new Thread(thr[i]); // each array slot holds an instance
            myThread[i].start(); // start the thread
        }
    }
}
As of now, my waiter thread runs once and quits. I want it to run multiple times ( in this case, 3 times).
Any help would be appreciated.
What
Using TOPSIS-Python

TOPSIS-Python can be run as in the following example:
import numpy as np
from topsis import topsis

a = [[7, 9, 9, 8], [8, 7, 8, 7], [9, 6, 8, 9], [6, 7, 8, 6]]
w = [0.1, 0.4, 0.3, 0.2]
I = np.array([1, 1, 1, 0])
decision = topsis.topsis(a, w, I)

The decision matrix (a) should be constructed with each row representing an alternative, and each column representing a criterion. We have used an example given in TOPSIS Method in MADM (Dr. Farhad Faez).
Weights (w), if not already normalised, will be normalised upon initialisation. Information on benefit (1) or cost (0) criteria should be provided in I.
By default, the optimisation (TOPSIS calculation) does not take place. No values are stored in decision.C or decision.optimum_choice.
These can be calculated, either by calling decision.calc(), or by calling a representation of the decision (which will itself call decision.calc()):
decision
Alternatives ranking C: [0.74269409 0.40359933 0.17586999 0.44142927]
Best alternative a[0]: [7. 9. 9. 8.]
The rankings are saved in decision.C, with the highest ranking 0.74269409 offering us the best decision, and the lowest ranking 0.17586999 offering the worst decision making, according to the TOPSIS method.
We are also then shown the best alternative index (which happens to be index 0 in this example), and the associated criteria coefficients of this alternative.
A time validator.
#include <Wt/WTimeValidator.h>
Creates a new WTimeValidator.
The validator will accept only times within the indicated range bottom to top, in the given time format.
Sets the lower limit of the valid time range.
The default is a null time constructed using WTime()
Sets the validator format.
Sets the upper limit of the valid time range.
The default is a null time constructed using WTime()
Validates the given input.
The input is considered valid only when it is blank for a non-mandatory field, or represents a time in the given format, and within the valid range.
Reimplemented from Wt::WRegExpValidator.
CS143
Summer 2011
Handout 10
June 29th, 2011
Bottom-Up Parsing
Handout written by Maggie Johnson and revised by Julie Zelenski.
Bottom-up parsing
As the name suggests, bottom-up parsing works in the opposite direction from top- down. A top-down parser begins with the start symbol at the top of the parse tree and works downward, driving productions in forward order until it gets to the terminal leaves. A bottom-up parse starts with the string of terminals itself and builds from the leaves upward, working backwards to the start symbol by applying the productions in reverse. Along the way, a bottom-up parser searches for substrings of the working string that match the right side of some production. When it finds such a substring, it reduces it, i.e., substitutes the left side nonterminal for the matching right side. The goal is to reduce all the way up to the start symbol and report a successful parse.
In general, bottom-up parsing algorithms are more powerful than top-down methods, but not surprisingly, the constructions required are also more complex. It is difficult to write a bottom-up parser by hand for anything but trivial grammars, but fortunately, there are excellent parser generator tools like bison that build a parser from an input specification, not unlike the way flex builds a scanner to your spec.
Shift-reduce parsing is the most commonly used and the most powerful of the bottom-up techniques. It takes as input a stream of tokens and develops the list of productions used to build the parse tree, but the productions are discovered in reverse order of a top- down parser. Like a table-driven predictive parser, a bottom-up parser makes use of a stack to keep track of the position in the parse and a parsing table to determine what to do next.
To illustrate stack-based shift-reduce parsing, consider this simplified expression grammar:
S –> E
E –> T | E + T
T –> id | (E)
The shift-reduce strategy divides the string that we are trying parse into two parts: an undigested part and a semi-digested part. The undigested part contains the tokens that are still to come in the input, and the semi-digested part is put on a stack. If parsing the string v, it starts out completely undigested, so the input is initialized to v, and the stack is initialized to empty. A shift-reduce parser proceeds by taking one of three actions at each step:
Reduce: If we can find a rule A –> w, and if the contents of the stack are qw for some q (q may be empty), then we can reduce the stack to qA. We are applying the production for the nonterminal A backwards. For example, using the grammar above, if the stack contained (id we can use the rule T –> id to reduce the stack to (T.
There is also one special case: reducing the entire contents of the stack to the start symbol with no remaining input means we have recognized the input as a valid sentence (e.g., the stack contains just w, the input is empty, and we apply S –> w). This is the last step in a successful parse.
The w being reduced is referred to as a handle. Formally, a handle of a right sentential form u is a production A –> w, and a position within u where the string w may be found and replaced by A to produce the previous right- sentential form in a rightmost derivation of u. Recognizing valid handles is the difficult part of shift-reduce parsing.
Shift:
If it is impossible to perform a reduction and there are tokens remaining in the undigested input, then we transfer a token from the input onto the stack. This is called a shift. For example, using the grammar above, suppose the stack contained ( and the input contained id+id). It is impossible to perform a reduction on ( since it does not match the entire right side of any of our productions. So, we shift the first character of the input onto the stack, giving us (id on the stack and +id) remaining in the input.
Error:
If neither of the two above cases apply, we have an error. If the sequence on the stack does not match the right-hand side of any production, we cannot reduce. And if shifting the next input token would create a sequence on the stack that cannot eventually be reduced to the start symbol, a shift action would be futile. Thus, we have hit a dead end where the next token conclusively determines the input cannot form a valid sentence. This would happen in the above grammar on the input id+). The first id would be shifted, then reduced to T and again to E, next + is shifted. At this point, the stack contains E+ and the next input token is ). The sequence on the stack cannot be reduced, and shifting the ) would create a sequence that is not viable, so we have an error.
The general idea is to read tokens from the input and push them onto the stack attempting to build sequences that we recognize as the right side of a production. When we find a match, we replace that sequence with the nonterminal from the left side and continue working our way up the parse tree. This process builds the parse tree from the leaves upward, the inverse of the top-down parser. If all goes well, we will end up moving everything from the input to the stack and eventually construct a sequence on the stack that we recognize as a right-hand side for the start symbol.
Let’s trace the operation of a shift-reduce parser in terms of its actions (shift or reduce) and its data structure (a stack). The chart below traces a parse of (id+id) using the previous example grammar:
PARSE STACK     REMAINING INPUT     PARSER ACTION
                (id + id)$          Shift (push next token from input on stack, advance input)
(               id + id)$           Shift
(id             + id)$              Reduce: T –> id (pop right-hand side of production off stack, push left-hand side, no change in input)
(T              + id)$              Reduce: E –> T
(E              + id)$              Shift
(E +            id)$                Shift
(E + id         )$                  Reduce: T –> id
(E + T          )$                  Reduce: E –> E + T (Ignore: E –> T)
(E              )$                  Shift
(E)             $                   Reduce: T –> (E)
T               $                   Reduce: E –> T
E               $                   Reduce: S –> E
S               $                   Accept
In the above parse on step 7, we ignored the possibility of reducing E –> T because that would have created the sequence (E + E on the stack which is not a viable prefix of a right sentential form. Formally, viable prefixes are the set of prefixes of right sentential forms that can appear on the stack of a shift-reduce parser, i.e. prefixes of right sentential forms that do not extend past the end of the rightmost handle. Basically, a shift-reduce parser will only create sequences on the stack that can lead to an eventual reduction to the start symbol. Because there is no right-hand side that matches the sequence (E + E and no possible reduction that transforms it to such, this is a dead end and is not considered. Later, we will see how the parser can determine which reductions are valid in a particular situation.
As they were for top-down parsers, ambiguous grammars are problematic for bottom- up parsers because these grammars could yield more than one handle under some circumstances. These types of grammars create either shift-reduce or reduce-reduce conflicts. The former refers to a state where the parser cannot decide whether to shift or reduce. The latter refers to a state where the parser has more than one choice of production for reduction. An example of a shift-reduce conflict occurs with the if-then- else construct in programming languages. A typical production might be:
S –> if E then S | if E then S else S
Consider what would happen to a shift-reduce parser deriving this string:
if E then if E then S else S
At some point the parser's stack would have:
if E then if E then S
with else as the next token. It could reduce because the contents of the stack match the right-hand side of the first production or shift the else trying to build the right-hand side of the second production. Reducing would close off the inner if and thus associate the else with the outer if. Shifting would continue building and later reduce the inner if with the else. Either is syntactically valid given the grammar, but two different parse trees result, showing the ambiguity. This quandary is commonly referred to as the dangling else. Does an else appearing within a nested if statement belong to the inner or the outer? The C and Java languages agree that an else is associated with its nearest unclosed if. Other languages, such as Ada and Modula, avoid the ambiguity by requiring a closing endif delimiter.
Reduce-reduce conflicts are rare and usually indicate a problem in the grammar definition.
Now that we have general idea of how a shift-reduce parser operates, we will look at how it recognizes a handle, and how it decides which production to use in a reduction. To deal with these two issues, we will look at a specific shift-reduce implementation called LR parsing.
LR Parsing
LR parsers ("L" for left to right scan of input, "R" for rightmost derivation) are efficient, table-driven shift-reduce parsers. The class of grammars that can be parsed using LR methods is a proper superset of the class of grammars that can be parsed with predictive LL parsers. In fact, virtually all programming language constructs for which CFGs can be written can be parsed with LR techniques. As an added advantage, there is no need for lots of grammar rearrangement to make it acceptable for LR parsing the way that LL parsing requires.
The primary disadvantage is the amount of work it takes to build the tables by hand, which makes it infeasible to hand-code an LR parser for most grammars. Fortunately, there are LR parser generators that create the parser from an unambiguous CFG specification. The parser tool does all the tedious and complex work to build the necessary tables and can report any ambiguities or language constructs that interfere with the ability to parse it using LR techniques.
We begin by tracing how an LR parser works. Determining the handle to reduce in a sentential form depends on the sequence of tokens on the stack, not only the topmost ones that are to be reduced, but the context at which we are in the parse. Rather than reading and shifting tokens onto a stack, an LR parser pushes "states" onto the stack; these states describe what is on the stack so far. Think of each state as encoding the current left context. The state on top of the stack possibly augmented by peeking at a lookahead token enables us to figure out whether we have a handle to reduce, or whether we need to shift a new state on top of the stack for the next input token.
An LR parser uses two tables:
1. The action table Action[s,a] tells the parser what to do when the state on top of the stack is s and terminal a is the next input token. The possible actions are to shift a state onto the stack, to reduce the handle on top of the stack, to accept the input, or to report an error.
2. The goto table Goto[s,X] indicates the new state to place on top of the stack after a reduction of the nonterminal X while state s is on top of the stack.
The two tables are usually combined, with the action table specifying entries for terminals, and the goto table specifying entries for nonterminals.
LR Parser Tracing
We start with the initial state s0 on the stack. The next input token is the terminal a and the current state is st. The action of the parser is as follows:

• If Action[st, a] is shift, we push the specified state onto the stack. We then call yylex() to get the next token a from the input.

• If Action[st, a] is reduce Y –> X1…Xk then we pop k states off the stack (one for each symbol in the right side of the production) leaving state su on top. Goto[su, Y] gives a new state sv to push on the stack. The input token is still a (i.e., the input remains unchanged).

• If Action[st, a] is accept then the parse is successful and we are done.

• If Action[st, a] is error (the table location is blank) then we have a syntax error. With the current top of stack and next input we can never arrive at a sentential form with a handle to reduce.
As an example, consider the following simplified expression grammar. The productions have been sequentially numbered so we can refer to them in the action table:
1) E –> E + T
2) E –> T
3) T –> (E)
4) T –> id
Here is the combined action and goto table. In the action columns sN means shift state numbered N onto the stack number and rN action means reduce using production numbered N. The goto column entries are the number of the new state to push onto the stack after reducing the specified nonterminal. This is an LR(0) table (more details on table construction will come in a minute).
State on
Action
Goto
top of
)
stack
0
s4
s3
1
s5
accept
r2
r4
8
s7
7
r3
r1
Here is a parse of id + (id) using the LR algorithm with the above action and goto table:
STATE STACK           REMAINING INPUT     PARSER ACTION
S0                    id + (id)$          Shift S4 onto state stack, move ahead in input
S0 S4                 + (id)$             Reduce 4) T –> id, pop state stack, goto S2, input unchanged
S0 S2                 + (id)$             Reduce 2) E –> T, goto S1
S0 S1                 + (id)$             Shift S5
S0 S1 S5              (id)$               Shift S3
S0 S1 S5 S3           id)$                Shift S4
S0 S1 S5 S3 S4        )$                  Reduce 4) T –> id, goto S2
S0 S1 S5 S3 S2        )$                  Reduce 2) E –> T, goto S6
S0 S1 S5 S3 S6        )$                  Shift S7
S0 S1 S5 S3 S6 S7     $                   Reduce 3) T –> (E), goto S8
S0 S1 S5 S8           $                   Reduce 1) E –> E + T, goto S1
S0 S1                 $                   Accept
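To make the mechanics concrete, here is a minimal Python sketch of the table-driven loop just traced. The action and goto entries are hard-coded from the table above purely for illustration; the dictionary encoding and the token spellings ('id', '+', and so on) are assumptions of this sketch, not part of any real parser generator.

# LR(0) parse loop for: 0) E'->E  1) E->E+T  2) E->T  3) T->(E)  4) T->id
ACTION = {                               # shift entries: (state, token) -> new state
    (0, 'id'): 4, (0, '('): 3,
    (1, '+'): 5,
    (3, 'id'): 4, (3, '('): 3,
    (5, 'id'): 4, (5, '('): 3,
    (6, '+'): 5, (6, ')'): 7,
}
ACCEPT = (1, '$')                        # Action[1,$] = accept
REDUCE = {2: ('E', 1), 4: ('T', 1), 7: ('T', 3), 8: ('E', 3)}  # state -> (lhs, |rhs|)
GOTO = {(0, 'E'): 1, (0, 'T'): 2, (3, 'E'): 6, (3, 'T'): 2, (5, 'T'): 8}

def parse(tokens):
    stack = [0]                          # stack of states; push the initial state
    tokens = tokens + ['$']
    i = 0
    while True:
        state, a = stack[-1], tokens[i]
        if (state, a) == ACCEPT:
            return True
        if state in REDUCE:              # dot at the end of a production: reduce
            lhs, n = REDUCE[state]
            del stack[-n:]               # pop one state per right-side symbol
            stack.append(GOTO[(stack[-1], lhs)])
        elif (state, a) in ACTION:       # shift: push the successor state
            stack.append(ACTION[(state, a)])
            i += 1
        else:
            return False                 # error: blank table entry

print(parse(['id', '+', '(', 'id', ')']))   # True
print(parse(['id', '+', '+']))              # False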
LR Parser Types

There are three main types of LR parsers: LR(k), simple LR(k), and lookahead LR(k) (abbreviated LR(k), SLR(k), and LALR(k)). The k identifies the number of tokens of lookahead. We will usually only concern ourselves with 0 or 1 tokens of lookahead, but the techniques do generalize to k > 1. The different classes of parsers all operate the same way (as shown above, being driven by their action and goto tables), but they differ in how their action and goto tables are constructed, and the size of those tables.
We will consider LR(0) parsing first, which is the simplest of all the LR parsing methods. It is also the weakest and although of theoretical importance, it is not used much in practice because of its limitations. LR(0) parses without using any lookahead at all. Adding just one token of lookahead to get LR(1) vastly increases the parsing power. Very few grammars can be parsed with LR(0), but most unambiguous CFGs can be parsed with LR(1). The drawback of adding the lookahead is that the algorithm becomes somewhat more complex and the parsing table gets much, much bigger. The full LR(1) parsing table for a typical programming language has many thousands of states compared to the few hundred needed for LR(0). A compromise in the middle is found in the two variants SLR(1) and LALR(1) which also use one token of lookahead but employ techniques to keep the table as small as LR(0). SLR(k) is an improvement over LR(0) but much weaker than full LR(k) in terms of the number of grammars for which it is applicable. LALR(k) parses a larger set of languages than SLR(k) but not quite as many as LR(k). LALR(1) is the method used by the yacc parser generator.
In order to begin to understand how LR parsers work, we need to delve into how their tables are derived. The tables contain all the information that drives the parser. As an example, we will show how to construct an LR(0) parsing table since they are the simplest and then discuss how to do SLR(1), LR(1), and LALR(1) in later handouts.
The essence of LR parsing is identifying a handle on the top of the stack that can be reduced. Recognizing a handle is actually easier than predicting a production was in top-down parsing. The weakness of LL(k) parsing techniques is that they must be able to predict which product to use, having seen only k symbols of the right-hand side. For LL(1), this means just one symbol has to tell all. In contrast, for an LR(k) grammar is able to postpone the decision until it has seen tokens corresponding to the entire right-hand side (plus k more tokens of lookahead). This doesn’t mean the task is trivial. More than one production may have the same right-hand side and what looks like a right-hand side may not really be because of its context. But in general, the fact that we see the entire right side before we have to commit to a production is a useful advantage.
Constructing LR(0) parsing tables
Generating an LR parsing table consists of identifying the possible states and arranging the transitions among them. At the heart of the table construction is the notion of an LR(0)
configuration or item. A configuration is a production of the grammar with a dot at some position on its right side. For example, A –> XYZ has four possible items:
A –> •XYZ
A –> X•YZ
A –> XY•Z
A –> XYZ•
This dot marks how far we have gotten in parsing the production. Everything to the left of the dot has been shifted onto the parsing stack and the next input token is in the First set of the symbol after the dot (or in the Follow set if that symbol is nullable). A dot at the right end of a configuration indicates that we have that entire configuration on the stack, i.e., we have a handle that we can reduce. A dot in the middle of the configuration indicates that to continue further, we need to shift a token that could start the symbol following the dot. For example, if we are currently in this position:

A –> X•YZ

We want to shift something from First(Y) (something that matches the next input token). Say we have productions Y –> u | w. Given that, these three configurations all correspond to the same state of the shift-reduce parser:

A –> X•YZ
Y –> •u
Y –> •w
At the above point in parsing, we have just recognized an X and expect the upcoming input to contain a sequence derivable from YZ. Examining the expansions for Y, we furthermore expect the sequence to be derivable from either u or w. We can put these three items into a set and call it a configurating set of the LR parser. The action of adding equivalent configurations to create a configurating set is called closure. Our parsing tables will have one state corresponding to each configurating set.
These configurating sets represent states that the parser can be in as it parses a string. Each state must contain all the items corresponding to each of the possible paths that are concurrently being explored at that point in the parse. We could model this as a finite automaton where we move from one state to another via transitions marked with a symbol of the CFG. For example:
Recall that we push states onto the stack in a LR parser. These states describe what is on the stack so far. The state on top of the stack (potentially combined with some lookahead) enables us to figure out whether we have a handle to reduce, or whether we need to read the next input token and shift a new state on top of the stack. We shift until
we reach a state where the dot is at the end of a production, at which point we reduce. This finite automaton is the basis for a LR parser: each time we perform a shift we are following a transition to a new state. Now for the formal rule for what to put in a configurating set. We start with a configuration:
A –> X1 … Xi • Xi+1 … Xj
which we place in the configurating set. We then perform the closure operation on the items in the configurating set. For each item in the configurating set where the dot precedes a nonterminal, we add configurations derived from the productions defining that nonterminal with the dot at the start of the right side of those productions. So, if we have
Xi+1 –> Y1 … Yg | Z1 … Zh
in the above example, we would add the following to the configurating set.
Xi+1 –> • Y1 … Yg
Xi+1 –> • Z1 … Zh
We repeat this operation for all configurations in the configurating set where a dot precedes a nonterminal until no more configurations can be added. So, if Y1 and Z1 are terminals in the above example, we would just have the three productions in our configurating set. If they are nonterminals, we would need to add the Y1 and Z1 productions as well.
In summary, to create a configurating set for the starting configuration A –> •u, we follow the closure operation (a code sketch follows the steps):

1. A –> •u is in the configurating set
2. If u begins with a terminal, we are done with this production
3. If u begins with a nonterminal B, add all productions with B on the left side, with the dot at the start of the right side: B –> •v
4. Repeat steps 2 and 3 for any productions added in step 3. Continue until you reach a fixed point.
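Here is a rough Python sketch of the closure computation. The encoding of the grammar as a dict and of an item as a (left side, right side, dot position) triple are assumptions of the sketch, not anything prescribed by the algorithm:

GRAMMAR = {
    "E'": [('E',)],
    'E': [('E', '+', 'T'), ('T',)],
    'T': [('(', 'E', ')'), ('id',)],
}

def closure(items):
    result = set(items)
    changed = True
    while changed:                       # repeat until a fixed point is reached
        changed = False
        for (lhs, rhs, dot) in list(result):
            if dot < len(rhs) and rhs[dot] in GRAMMAR:   # dot precedes a nonterminal
                for prod in GRAMMAR[rhs[dot]]:
                    item = (rhs[dot], prod, 0)           # dot at start of right side
                    if item not in result:
                        result.add(item)
                        changed = True
    return result

# The closure of E' -> .E yields all five items of configurating set I0 below.
print(closure({("E'", ('E',), 0)}))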
The other information we need to build our tables is the transitions between configurating sets. For this, we define the successor function. Given a configurating set C and a grammar symbol X, the successor function computes the successor configurating set C' = successor(C,X). The successor function describes what set the parser moves to upon recognizing a given symbol.
The successor function is quite simple to compute. We take all the configurations in C where there is a dot preceding X, move the dot past X and put the new configurations in C', then we apply the closure operation to C'. The successor configurating set C' represents the state we move to when encountering symbol X in state C.
The successor function is defined to only recognize viable prefixes. There is a transition from A –> u•xv to A –> ux•v on the input x. If u was already recognized as part of a viable prefix and we've just seen an x, then we can extend the prefix by adding this symbol without destroying viability.
Here is an example of building a configurating set, performing closure, and computing the successor function. Consider the following item from our example expression grammar:
E –> E• + T
To obtain the successor configurating set on + we first put the following configuration in
C':
E –> E +•T
We then perform a closure on this set:
E –> E+•T
T –> •(E)
T –> •id
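Continuing the sketch, the successor function is a thin layer on top of the closure() helper above (the function name and item encoding are again illustrative, not a prescribed API):

def successor(items, X):
    moved = {(lhs, rhs, dot + 1)                    # move the dot past X...
             for (lhs, rhs, dot) in items
             if dot < len(rhs) and rhs[dot] == X}   # ...wherever it precedes X
    return closure(moved)                           # then close the result

I0 = closure({("E'", ('E',), 0)})
I1 = successor(I0, 'E')    # {E' -> E., E -> E.+T}
I5 = successor(I1, '+')    # {E -> E+.T, T -> .(E), T -> .id}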
Now, to create the action and goto tables, we need to construct all the configurating sets and successor functions for the expression grammar. At the highest level, we want to start with a configuration with a dot before the start symbol and move to a configuration with a dot after the start symbol. This represents shifting and reducing an entire sentence of the grammar. To do this, we need the start symbol to appear on the right side of a production. This may not happen in the grammar so we modify it. We create an augmented grammar by adding the production:
S' –> S
where S is the start symbol. So we start with the initial configurating set C0, which is the closure of S' –> •S. The augmented grammar for the example expression grammar:
0) E' –> E
1) E –> E + T
2) E –> T
3) T –> (E)
4) T –> id
We create the complete family F of configurating sets as follows (a code sketch follows the steps):

1. Start with F containing the configurating set C0, derived from the configuration S' –> •S
2. For each configurating set C in F and each grammar symbol X such that successor(C,X) is not empty, add successor(C,X) to F
3. Repeat step 2 until no more configurating sets can be added to F
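A sketch of this loop in Python, reusing the closure() and successor() helpers from the earlier sketches (build_family is an invented name for illustration):

def build_family(start_item, symbols):
    family = [closure({start_item})]            # step 1: F starts with C0
    changed = True
    while changed:                              # step 3: repeat until no additions
        changed = False
        for C in list(family):
            for X in symbols:                   # step 2: try every grammar symbol
                S = successor(C, X)
                if S and S not in family:
                    family.append(S)
                    changed = True
    return family

F = build_family(("E'", ('E',), 0), ['id', '+', '(', ')', 'E', 'T'])
print(len(F))   # 9 configurating sets, matching I0 through I8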
Here is the full family of configurating sets for the grammar given above.
Configurating set                         Successor
I0:  E' –> •E                             E: I1
     E –> •E+T                            T: I2
     E –> •T                              (: I3
     T –> •(E)                            id: I4
     T –> •id
I1:  E' –> E•                             +: I5
     E –> E•+T
I2:  E –> T•                              Reduce 2
I3:  T –> (•E)                            E: I6
     E –> •E+T                            T: I2
     E –> •T                              (: I3
     T –> •(E)                            id: I4
     T –> •id
I4:  T –> id•                             Reduce 4
I5:  E –> E+•T                            T: I8
     T –> •(E)                            (: I3
     T –> •id                             id: I4
I6:  T –> (E•)                            ): I7
     E –> E•+T                            +: I5
I7:  T –> (E)•                            Reduce 3
I8:  E –> E+T•                            Reduce 1
Note that the order of defining and numbering the sets is not important; what is important is that all the sets are included.
A useful means to visualize the configurating sets and successors is with a diagram like the one shown below. The transitions mark the successor relationship between sets. We call this a goto-graph or transition diagram.
To construct the LR(0) table, we use the following algorithm. The input is an augmented grammar G' and the output is the action/goto tables:
1. Construct F = {I0, I1, ... In}, the collection of configurating sets for G'.
2. State i is determined from Ii. The parsing actions for the state are determined as follows:
   a) If A –> u• is in Ii then set Action[i,a] to reduce A –> u for all input (A not equal to S').
   b) If S' –> S• is in Ii then set Action[i,$] to accept.
   c) If A –> u•av is in Ii and successor(Ii, a) = Ij, then set Action[i,a] to shift j (a is a terminal).
3. The goto transitions for state i are constructed for all nonterminals A using the rule: If successor(Ii, A) = Ij, then Goto[i, A] = j.
4. All entries not defined by rules 2 and 3 are errors.
5. The initial state is the one constructed from the configurating set containing S' –> •S.
Notice how the shifts in the action table and the goto table are just transitions to new states. The reductions are where we have a handle on the stack that we pop off and replace with the nonterminal for the handle; this occurs in the states where the • is at the end of a production.
At this point, we should go back and look at the parse of id + (id) from earlier in the handout and trace what the states mean. (Refer to the action and goto tables and the parse diagrammed on page 4 and 5).
Here is the parse (notice it is an reverse rightmost derivation, if you read from the bottom upwards, it is always the rightmost nonterminal that was operated on).
+ (id)
T –> id
E –> T
+ (T)
+ (E)
T –> (E)
E –> E+T
E' –> E
E'
Now let’s examine the action of the parser. We start by pushing s 0 on the stack. The first token we read is an id. In configurating set I 0 , the successor of id is set I 4, this means pushing s 4 onto the stack. This is a final state for id (the • is at the end of the production) so we reduce the production T –> id. We pop s 4 to match the id being reduced and we are back in state s 0 . We reduced the handle into a T, so we use the goto part of the table, and Goto[0, T] tells us to push s 2 on the stack. (In set I 0 , the successor for T was set I 2 ). In set I 2 , the action is to reduce E –> T, so we pop off the s 2 state and are back in s 0 . Goto[0, E] tells us to push s 1 . From set I 1 seeing a + takes us to set I 5 (push s 5 on the stack).
From set I 5 we read an open ( which that takes us to set I 3 (push s 3 on the stack). We
have an id coming up and so we shift state s 4 . Set 4 reduces T –> id, so we pop s 4 to remove right side and we are back in state s 3 . We use the goto table Goto[3, T] to get to set
I 2 . From here we reduce E –> T, pop s 2 to get back to state s 3 now we goto s 6
tells us to shift s 7 . Now in s 7 we reduce T –> (E). We pop the top three states off (one for each symbol in the right-hand side of the production being reduced) and we are back in
Action[6, )]
14
s 5 again. Goto[5,T] tells us to push s 8 . We reduce by E –> E + T which pops off three states to return to s 0 . Because we just reduced E we goto s1. The next input symbol is $ means we completed the production E' –> E and the parse is successful.
The stack allows us to keep track of what we have seen so far and what we are in the middle of processing. We shift states that represent the amalgamation of the possible options onto the stack until we reach the end of a production in one of the states. Then we reduce. After a reduce, states are popped off the stack to match the symbols of the matching right-side. What's left on the stack is what we have yet to process. Consider what happens when we try to parse id++. We start in s0 and do the same as above to reduce the id to T and then to E. Now we are in set I 5 and we encounter another +. This is an error because the action table is empty for that transition. There is no successor for + from that configurating set, because there is no viable prefix that begins
E++.
Subset construction and closure You may have noticed a similarity between subset construction and the closure operation. If you think back to a few lectures, we explored the subset construction algorithm for converting an NFA into a DFA. The basic idea was create new states that represent the non-determinism by grouping the possibilities that look the same at that stage and only diverging when you get more information. The same idea applies to creating the configurating sets for the grammar and the successor function for the transitions. We create a NFA whose states are all the different individual configurations. We put all the initial configurations into one start state. Then draw all the transitions from this state to the other states where all the other states have only one configuration each. This is the NFA we do subset construction on to convert into a DFA. Here is a simple example starting from the grammar consisting of strings with one or more a’s:
1) S' –> S 2) S –> Sa 3) S –> a
Close on the augmented production and put all those configurations in a set:
a
Do subset construction on the resulting NFA to get the configurating sets:
15
I0
I1
I3
I2
Interesting, isn't it, to see the parallels between the two processes? They both are grouping the possibilities into states that only diverge once we get further along and can be sure of which path to follow.
Limitations of LR(0) Parsing
The LR(0) method may appear to be a strategy for creating a parser that can handle any context-free grammar, but in fact, the grammars we used as examples in this handout were specifically selected to fit the criteria needed for LR(0) parsing. Remember that LR(0) means we are parsing with zero tokens of lookahead. The parser must be able to determine what action to take in each state without looking at any further input symbols, i.e. by only considering what the parsing stack contains so far. In an LR(0) table, each state must only shift or reduce. Thus an LR(0) configurating set cannot have both shift and reduce items, and can only have exactly one reduce item. This turns out to be a rather limiting constraint.
To be precise, a grammar is LR(0) if the following two conditions hold:
1. For any configurating set containing the item A –> u•xv there is no complete item B – > w• in that set. In the tables, this translates to no shift-reduce conflict on any state. This means the successor function from that set either shifts to a new state or reduces, but not both.
2. There is at most one complete item A –> u• in each configurating set. This translates to no reduce-reduce conflict on any state. The successor function has at most one reduction.
Very few grammars meet the requirements to be LR(0). For example, any grammar with an ε-rule will be problematic. If the grammar contains the production A –> ε, then the item A –> •ε will create a shift-reduce conflict if there is any other non-null production for A. ε-rules are fairly common programming language grammars, for example, for optional features such as type qualifiers or variable declarations.
Even modest extensions to earlier example grammar cause trouble. Suppose we extend it to allow array elements, by adding the production rule T–>id[E]. When we construct the configurating sets, we will have one containing the items T–>id• and T–>id•[E] which
16
will be a shift-reduce conflict. Or suppose we allow assignments by adding the productions E –> V = E and V –> id. One of the configurating sets for this grammar contains the items V–>id• and T–>id•, leading to a reduce-reduce conflict.
The above examples show that the LR(0) method is just too weak to be useful. This is caused by the fact that we try to decide what action to take only by considering what we have seen so far, without using any information about the upcoming input. By adding just a single token lookahead, we can vastly increase the power of the LR parsing technique and work around these conflicts. There are three ways to use a one token lookahead: SLR(1), LR(1) and LALR(1), each of which we will consider in turn in the next few lectures.
Bibliography
A. Aho, R. Sethi, J. Ullman, Compilers: Principles, Techniques, and Tools. Reading, MA:
Addison-Wesley, 1986.
J.P. Bennett, Introduction to Compiling Techniques. Berkshire, England: McGraw-Hill, 1990.
C.
Fischer, R. LeBlanc, Crafting a Compiler. Menlo Park, CA: Benjamin/Cummings, 1988.
D.
Grune, H. Bal, C. Jacobs, K. Langendoen, Modern Compiler Design. West Sussex, England: Wiley, 2000.
K.
Loudon, Compiler Construction. Boston, MA: PWS, 1997
A.
Pyster, Compiler Design and Construction. New York, NY: Van Nostrand Reinhold, 1988.
J. Tremblay, P. Sorenson, The Theory and Practice of Compiler Writing. New York, NY:
McGraw-Hill,. | https://it.scribd.com/document/208569528/Bottom-Up-Parsing | CC-MAIN-2020-45 | refinedweb | 6,017 | 67.79 |
generics error
karthik swamy
Ranch Hand
Posts: 45
karthik swamy
Ranch Hand
Posts: 45
Seetharaman Venkatasamy wrote:your *public T getData() * expect the generic return type T . but you have returned a string!
so change the method return type as String[preferable approach] else cast the string to T and return as in *return (T)s*;
<edit>added clarity</edit>
thanks its working,but can it wil work for integer type
karthik swamy
Ranch Hand
Posts: 45
Seetharaman Venkatasamy wrote:karthik swamy wrote:but can it wil work for integer type
actually *return (T)s* this lead you in confusion. that is why I said first approach is preferable. Yes , it will work for an Integer type also
but generics is used for auto typecast then why we are typecasting like return (t)s
| http://www.coderanch.com/t/548066/java/java/generics-error | CC-MAIN-2016-30 | refinedweb | 134 | 63.22 |
BOSTON - Microsoft's planned Atlas framework for AJAX (Asynchronous JavaScript and XML) faces difficulty with its development, but promises to be a top-notch offering for the trendy Web scripting technique, a moderator of a TechEd 2006 session said on Wednesday.
One particular feature, Update Panel, is beset with reliability issues, according to moderator Jeff Prosise, co-founder of Microsoft partner Wintellect, a .Net consulting firm. Update Panel is an Atlas control that makes it easy to do incremental page refreshes.
Some tough decisions will need to be made pertaining to changes to Update Panel, said Prosise, who has been made privy to Atlas development at Microsoft. He declined to be more specific about these decisions, except to say that Update Panel will definitely be included in Atlas and that programmers are working on the issue.
"There's some very smart people trying to get that thing to work right now," Prosise said.
"[Update Panel is] an incredible piece of code but it doesn't always work. It'll work most of the time," said Prosise. Update Panel is not as efficient as hand-coding, but hand-coding takes much longer to do, he said.
Seeking feedback from the packed room of about 200 people, Prosise said current plans call for Atlas to support several browsers: Internet Explorer 5 and higher, Mozilla Firefox, and Safari. The jury is still out on whether Opera will be supported. He asked if Opera support would be critical; a few people raised their hands.
"The Atlas team, I can tell you firsthand, is very serious about browser compatibility," said Prosise.
Atlas is by no means the first AJAX framework, but it has really clever code in it, he said. It features, among other things, improvements for working with JavaScript, but does not displace the scripting language with Microsoft's own technology.
"Another big concern is that Microsoft is somehow co-opting JavaScript. They're not co-opting JavaScript. They're not co-opting in any way," said Prosise.
Atlas is a framework for AJAX programming on ASP.Net 2.0. Featured in Atlas is a set of server-side controls that look and function like ASP.Net controls familiar to Windows programmers, Prosise said.
Atlas makes JavaScript look like C#, leveraging functions that JavaScript normally does not, such as classes, inheritance, and namespaces, said Prosise.
The C#-like functionality is beneficial to JavaScript, according to Prosise. "I don't know about you, [but] I hate JavaScript. It's a horrible language," he said. "It just feels like a toy" compared to what Prosise said he has used.
While it seems that ASP.Net and Atlas are in conflict because ASP.Net is server-based and Atlas pertains to the client, AJAX can feature HTTP calls back to the server, Prosise said. "In Microsoft's mind, there is no conflict," he said.
Atlas is due for release in the planned "Orcas" version of the Visual Studio developer platform, but Microsoft has set no firm ship date for that product. The company has said Orcas would ship some time after the planned January 2007 release of the Windows Vista operating system.
One plan under consideration for Atlas is enabling client calls from a Web service without requiring use of a Web server, said Prosise. | http://www.infoworld.com/d/developer-world/microsoft-ajax-framework-forges-ahead-in-spite-difficulties-947 | crawl-002 | refinedweb | 548 | 64.3 |
This demo shows you how to upload videos to VOD through the webpage. It builds two HTTP services based on SCF:
The system mainly involves four components: browser, API Gateway, SCF, and VOD. Here, API Gateway and SCF are the deployment objects of this demo as shown below:
The main business process is as follows:
Note:
The SCF code in the demo is developed based on Python 3.6. SCF also supports other programming languages such as Python 2.7, Node.js, Go, PHP, and Java for your choice as needed. For more information, please see Development Guide.
The VOD web upload demo (including the webpage code and service backend code) provided in this document is open-source and free of charge, but it may incur the following fees during service building and use:
The web upload demo is deployed on SCF with a service entry provided by API Gateway. To make it easier for you to build services, we provide a quick deployment script as detailed below.
The deployment script needs to be executed on a CVM instance meeting the following requirements:
Ubuntu Server 16.04.1 LTS 64-bitor
Ubuntu Server 18.04.1 LTS 64-bit.
For detailed directions on how to purchase a CVM instance and reinstall the system, please see Operation Guide - Creating Instances via CVM Purchase Page and Operation Guide - Reinstalling System, respectively.
Note:
- The web upload demo itself does not depend on CVM but only uses CVM to run the deployment script.
- If you do not have a CVM instance satisfying the above conditions, you can also run the script on another Linux (such as CentOS or Debian) or macOS server with public network access, but you need to modify certain commands in the deployment script based on the operating system. Please search for the specific modification method by yourself.
Please activate the VOD service as instructed in Getting Started - Step 1.
Your API key (i.e.,
SecretId and
SecretKey) and
APPID are required for deploying and running the web upload demo service.
APPIDon the Account Information page in the console as shown below:
Log in to the CVM instance prepared in step 1 as instructed in Logging In to Linux Instance in Standard Login Method and enter and run the following command on the remote terminal:
ubuntu@VM-69-2-ubuntu:~$ export SECRET_ID=AKxxxxxxxxxxxxxxxxxxxxxxx; export SECRET_KEY=xxxxxxxxxxxxxxxxxxxxx;export APPID=125xxxxxxx;git clone ~/vod-server-demo; bash ~/vod-server-demo/installer/web_upload_scf.sh
Note:
Please assign the corresponding values obtained in step 3 to
SECRET_ID,
SECRET_KEY, and
APPIDin the command.
This command will download the demo source code from GitHub and automatically run the installation script. The installation process will take several minutes (subject to the CVM network conditions), during which the remote terminal will print the following information:
[2020-04-25 23:03:20] Start installing pip3. [2020-04-25 23:03:23] pip3 is successfully installed. [2020-04-25 23:03:23] Start installing Tencent Cloud SCF. [2020-04-25 23:03:26] SCF is successfully installed. [2020-04-25 23:03:26] Start configuring SCF. [2020-04-25 23:03:28] SCF configuration is completed. [2020-04-25 23:03:28] Start deploying the VOD client upload client signature distribution service. [2020-04-25 23:03:40] The deployment of the VOD client upload signature distribution service is completed. [2020-04-25 23:03:44] Start deploying the VOD web upload page. [2020-04-25 23:03:53] The deployment of the VOD web upload page is completed. [2020-04-25 23:03:53] Please access the following address in your browser to use the demo:
Copy the address of the webpage in the output log (which is in this example).
Note:
If the following warning is displayed in the output log, it is generally because the CVM instance cannot immediately parse the service domain name deployed just now. You can ignore this warning.
[2020-04-25 17:18:44] Warning: the client upload signature distribution service failed the test.
fileId) and URLs of the uploaded video and cover will be displayed at the bottom of the page as shown below:
Note:
You can try out other features on the upload page as prompted.
Both the upload page and upload signature distribution functions use API Gateway to provide APIs. The specific API protocol is as detailed below:
You can access the SCF service list to view the details of the upload page service:
Note:
- The two SCF functions used by the demo are deployed under the namespace
vod_demoin the Guangzhou region.
- You need to select the corresponding region and namespace in the console to view the deployed SCF functions.
Click the function name, select Trigger Management on the left, and Access Path on the right is the URL of the upload page. Click API Service Name to redirect to the corresponding API Gateway page as shown below:
To test the service, directly access the page URL in a browser to check whether the upload page is displayed normally.
You can access the SCF service list to view the details of the upload signature distribution service in the same way as detailed in Upload page.
Click the function name, select Trigger Management on the left, and Access Path on the right is the URL of the service. Click API Service Name to redirect to the corresponding API Gateway page as shown below:
To test the service, manually send an HTTP request and run the following command on a Linux or macOS device with public network access (please modify the service URL according to the actual situation):
curl -d ''
If the service is normal, an upload signature will be returned. Below is a sample signature:
VYapc9EYdoZLzGx0CglRW4N6kuhzZWNyZXRJZD1BS0lEZk5xMzl6dG5tYW1tVzBMOXFvZERia25hUjdZa0xPM1UmY3VycmVudFRpbWVTdGFtcD0xNTg4NTg4MDIzJmV4cGlyZVRpbWU9MTU4ODU4ODYyMyZyYW5kb209MTUwNzc4JmNsYXNzSWQ9MCZvbmVUaW1lVmFsaWQ9MCZ2b2RTdWJBcHBJZD0w
You can also use third-party tools such as Postman to send HTTP requests. Please search for specific usage on the internet.
main_handler()is the entry function.
web_upload.htmlfile, which is the upload page content.
html_file = open(HTML_FILE, encoding='utf-8') html = html_file.read()
config.json, which refer to the content that you cannot predict when you write the SCF service and need to determine during the deployment process. The content is written into
config.jsonin real time by the deployment script before deploying the upload page service.
conf_file = open(CONF_FILE, encoding='utf-8') conf = conf_file.read() conf_json = json.loads(conf)
render_templateand modify the upload page content according to the configuration information obtained in the previous step. The configuration items are expressed in the format of
"variable name": "value"in the
config.jsonfile or in the format of
{variable name}in the
web_upload.htmlfile. When modifying them, please replace them with the specific values as detailed below.
def render_template(html, keys): """Replace the variables (in the format of `${variable name}`) in HTML with specific content.""" for key, value in keys.items(): html = html.replace("${" + key + "}", value) return html
return { "isBase64Encoded": False, "statusCode": 200, "headers": {'Content-Type': 'text/html'}, "body": html }
main_handler()is the entry function.
parse_conf_file()and read the configuration information from the
config.jsonfile. The configuration items are as described below (for specific parameters, please see Signature for Upload from Client):
parse_source_context()to parse the
sourceContextfield in the request body, which can be passed through to the event notification receipt service during video upload completion event notification (not used in this demo).
Note:
This field is optional during the upload process. If you don't need this feature, you can ignore this part of the code.
generate_sign()function to calculate the signature. For more information, please see Signature for Upload from Client.
return { "isBase64Encoded": False, "statusCode": 200, "headers": {"Content-Type": "text/plain; charset=utf-8", "Access-Control-Allow-Origin": "*", "Access-Control-Allow-Methods": "POST,OPTIONS"}, "body": str(signature, 'utf-8') }
Was this page helpful? | https://intl.cloud.tencent.com/document/product/266/39106 | CC-MAIN-2021-43 | refinedweb | 1,289 | 54.22 |
by Joe Mayo, 3/10/02, 9/19/04
HTTP is the primary transport mechanism for communicating with resources over
the World-Wide-Web. A developer will often want to obtain web pages for
different reasons to include: search engine page caching, obtaining info on a
particular page, or even implementing browser-like capabilities. To help
with this task, the .NET Framework includes classes that make this easy.
The HTTP classes in the .NET framework are HTTPWebRequest and HTTPWebResponse.
The steps involved require specifying a web page to get with a HTTPWebRequest
object, performing the actual request, and using a HTTPWebResponse object
to receive the page. Thereafter, you would use stream operations to
extract page information. Listing 1 demonstrates how this process works.
using System;
using System.IO;
using System.Net;
using System.Text;
/// <summary>
/// Fetches a Web Page
/// </summary>
class WebFetch
{
static void Main(string[] args)
{
// used to build entire input
StringBuilder sb = new StringBuilder();
// used on each read operation
byte[] buf = new byte[8192];
// prepare the web page we will be asking for
HttpWebRequest request = (HttpWebRequest)
WebRequest.Create("");
// execute the request
HttpWebResponse response = (HttpWebResponse)
request.GetResponse();
// we will read data via the response stream
Stream resStream = response.GetResponseStream();
string tempString = null;
int count = 0;
do
{
// fill the buffer with data
count = resStream.Read(buf, 0, buf.Length);
// make sure we read some data
if (count != 0)
{
// translate from bytes to ASCII text
tempString = Encoding.ASCII.GetString(buf, 0, count);
// continue building the string
sb.Append(tempString);
}
}
while (count > 0); // any more data to read?
// print out page source
Console.WriteLine(sb.ToString());
}
}
The program in Listing 1 will request the main page of a web site and
display the HTML on the console screen. Because the page data will be
returned in bytes, we set up a byte array, named buf, to hold
results. You'll see how this is used in a couple paragraphs.
The first step in getting a web page is to instantiate a HttpWebRequest object.
This occurs when invoking the static Create() method of the WebRequest
class. The parameter to the Create() method is a string
representing the URL of the web page you want. A similar overload of the Create()
method accepts a single Uri type instance. The Create() method
returns a WebRequest type, so we need to cast it to an HttpWebRequest
type before assigning it to the request variable. Here's the line
creating the request object:
// prepare the web page we will be asking for
HttpWebRequest request = (HttpWebRequest)
WebRequest.Create("");
Once you have the request object, use that to get a response object.
The response object is created by using the GetResponse() method
of the request object that was just created. The GetResponse()
method does not accept parameters and returns a WebResponse object which
must be cast to an HttpWebResponse type before we can assign it to the response
object. The following line shows how to obtain the HttpWebResponse
object.
// execute the request
HttpWebResponse response = (HttpWebResponse)
request.GetResponse();
The response object is used to obtain a Stream object, which is a
member of the System.IO namespace. The GetResponseStream() method
of the response instance is invoked to obtain this stream as follows:
// we will read data via the response stream
Stream resStream = response.GetResponseStream();
Remember the byte array we instantiated at the beginning of the algorithm?
Now we'll use it in the Read() method, of the stream we just got, to
retrieve the web page data. The Read() method accepts three
arguments: The first is the byte array to populate, second is the
beginning position to begin populating the array, and the third is the maximum
number of bytes to read. This method returns the actual number of bytes
that were read. Here's how the web page data is read:
// fill the buffer with data
count = resStream.Read(buf, 0, buf.Length);
We now have an array of bytes with the web page data in it. However, it is
a good idea to transform these bytes into a string. That way we can use
all the built-in string manipulation methods available with .NET. I chose
to use the static ASCII class of the Encoding class in the System.Text
namespace for this task. The ASCII class has a GetString() method
which accepts three arguments, similar to the Read() method we just
discussed. The first parameter is the byte array to read bytes from,
which we pass buf to. Second is the beginning position in buf
to begin reading. Third is the number of bytes in buf to
read. I passed count, which was the number of bytes returned from
the Read() method, as the third parameter, which ensures that only the
required number of bytes were read. Here's the code that translates bytes
in buf to a string and appends the results to a StringBuilder object.
// translate from bytes to ASCII text
tempString = Encoding.ASCII.GetString(buf, 0, count);
// continue building the string
sb.Append(tempString);
The buffer size is set at 8192, but that is only large enough to hold a small
web page. To get around this, the code that reads the response stream
must be wrapped in a loop that keeps reading until there isn't any more bytes
to return. Listing 1 uses a do loop because we have to make at
least one read. Recall that every read() returns a count of
items that were actually read. The while condition of the do
loop checks the count to make sure something was actually read. Also,
notice the if statement that makes sure we don't try to translate bytes when
nothing was read. Because we used a loop, we needed to collect the
results of each iteration, which is why we append the result of each iteration
to a StringBuilder.
The HttpWebRequest and HttpWebResponse classes from the .NET Base
Class Library make it easy to request web pages over the internet. The Httprequest
object identifies the Web page to get and contains a GetResponse() method
for obtaining a HttpWebResponse object. With a HttpWebResponse
object, we retrieve a stream to read bytes from. Iterating until all the
bytes of a Web page are read, translating bytes to strings, and holding the
string, makes it possible to obtain the entire Web page.
Your feedback is very important and I appreciate any constructive contributions
you have. Please feel free to contact me for any questions or comments you may
have about this article.
Feedback
I want to support this site. | http://csharp-station.com/HowTo/HttpWebFetch.aspx | crawl-002 | refinedweb | 1,086 | 65.32 |
I tried testing my simple sketch on page 1 of this thread. It works up to a point, but I must have the "bad" bootloader. Does anyone have a link to one that definitely fixes the watchdog issue? (For the Mega 2560).
drjiohnsmith: to answer an earlier comment, I like the mega, and I have the official IDE that comes from the arduino web site, as in here
I think you are not seeing or understanding the need for exactness.
Whether or not the Watchdog reset can be used with your board depends on what you actually have. More precisely what AVR chip is on your board and what bootloader is installed in that chip. And if what you have doesn't work, depending on what you have (board, IDE, and ISP programmer), you can use the IDE or tools that come with the IDE to modify/correct (update) a bootloader that won't work with watchdog reset to a new bootloader that will.
The information you provided above still does not answer the basic questions of what you actually have with respect to either s/w or h/w. Ok, so you "like" the "mega", but is that what you have? And if so, which "mega"? There is an Arduino "mega" that uses 1280 and one that uses a 2560. Or do you have some other AVR based Arduino?
And the s/w link you provided, actually has 27 different versions of the IDE that can be downloaded from that page. While the latest AVR based s/w version is at the top, that version changes through time so depending on when you downloaded the "latest" IDE it can be different versions. The reason that all this is very important is that in the past year there have been some pretty big changes to the IDE, and the AVR based bootloaders that ship with the Arduino s/w.
Some of these changes affect whether or not watchdog reset works and how to update a bootloader that doesn't work with one that does.
--- bill
drjiohnsmith:
would be nice if there was an example I could use, same as we have examples for things like lcd’s in the ide.
The exact code I posted on page 1 of this thread works, and does pretty-much what you ask. Here it is again:
#include <avr/wdt.h> void setup () { Serial.begin (115200); Serial.println ("Restarted."); wdt_enable (WDTO_1S); // reset after one second, if no "pat the dog" received } // end of setup void loop () { Serial.println ("Entered loop ..."); wdt_reset (); // give me another second to do stuff (pat the dog) while (true) ; // oops, went into a loop } // end of loop
Tested on the Mega2560 board once I replaced the bootloader with this working one:
Output:
Restarted. Entered loop ... Restarted. Entered loop ... Restarted. Entered loop ...
Here is an example sketch that shows how to use watchdog reset as a way to
intentionally reset the board.
While having a way to reset the board under software control can be useful, this example
unlike Nick’s example, is not a good example of what watchdog reset is often/normally used for.
When using watchdog in the normal way - as in Nick’s example,
best programming practice is NOT to do the wdt_reset() in an ISR but in a main loop
as his example shows, since doing it in an ISR will prevent a WDT reset from happening
when the foreground code is stuck in an an unintended loop - which is the entire point of using WDT reset.
— bill
/* *.... }
thank you guys,
I uploaded these examples in my arduino mega 2560 and was a fight to get remove them.
FernandoGarcia: I uploaded these examples in my arduino mega 2560 and was a fight to get remove them.
So you got to see first hand what happens when a bootloader doesn't properly initialize the WDT registers after a watchdog reset. Given it is such a simple/easy fix to the bootloader, I don't understand why the Arduino team doesn't ship an updated bootloader to fix this.
--- bill
I’ve provided a link above to the fixed one. Replace the file in your current installation, and do an “burn bootloader”.
I’m not sure if the fixed one is the one that ships with the IDE, it should be, one would think.
I don’t support your words.
If the delay() is used, the watchdog will reset all time.
Reset button don’t work. I have to power off.
After much searching the web I finally found a optiboot version for the mega1280 board. Tested on two different mega1280 boards a seeeduino and a arduino mega1280. First the the new boards.txt entry to support the new bootloader:
##############################################################
megao.name=Arduino Mega1280 Optiboot
megao.upload.protocol=arduino
megao.upload.maximum_size=130048
megao.upload.speed=115200
megao.bootloader.low_fuses=0xff
megao.bootloader.high_fuses=0xdc
And the optiboot hex file optiboot_atmega1280.hex
I was able to burn this bootloader using the arduinoISP sketch from IDE 1.0.3, but not using my USBtiny hardware programmer, as it does not work with flash sizes >64KB in size.
And finally a sketch from a poster here on this forum (forgot name, sorry whoever) to test the ability to handle a very short 15 millisec WDT interrupt timeout. Works with Uno but
will ‘brick’ mega boards with ‘stock’ bootloaders.
WDT_test.ino
//); } }
Lefty
stevemeng: I don't support your words. If the delay() is used, the watchdog will reset all time. Reset button don't work. I have to power off.
It's a bug in the bootloader, not the sketch. If you change the fuse to load the sketch (and bypass the bootloader) it will work correctly. Or, better, get a bootloader that handles the WDT correctly.
Regarding " Tested on the Mega2560 board once I replaced the bootloader with this working one: "
I found another bootloader (which seems to be the one installed with the Arduino IDE) under bootloaders/stk500v2/stk500boot_v2_mega2560.hex
It has the same file name as "this working one", but its size is 103kB versus 21kB for the other. Obviously, these are different bootloaders, and should have a different version, means e.g. stk500boot_v3_mega2560.hex for the working one. Besides of this, what are the differences between the two, other than watchdog support?
Thanks Nick to have solved the problem for the Mega2560 (now only the correctly working bootloader should be delivered with new Mega 2560 and new versions of the IDE).
Meanwhile, I have ordered the new Arduino Due. Has anyone yet successfully used watchdog with this one? Means, does the standard Due bootloader support watchdog, and what are the equivalent lines of code for the SAM3XE8 in the Arduino IDE?
I don't know, I suggest you post this question in the Due part of the forum.
Obviously, these are different bootloaders, and should have a different version, means e.g. stk500boot_v3_mega2560.hex for the working one.
"Stk500v2" is the name of the protocol supported by this bootloader, the v2 is NOT the version of bootloader itself. I don't think that there is a separate version number for the bootloader. Although that WOULD be a good idea. (Hmm. It does have the date that the code was compiled:
Bootloader>? CPU stats Arduino explorer stk500V2 by MLS Compiled on = Jan 28 2013
)
but its size is 103kB versus 21kB for the other.
Um? Not that I could see. shows as "514 lines (513 sloc) 22.989 kb" While the newer code at shows as "file 469 lines (468 sloc) 20.964 kb " Perhaps you were comparing HTML pretty-printed web page against 'raw' file size?
Embed: It has the same file name as "this working one", but its size is 103kB versus 21kB for the other.
We have to take that with a grain of salt, as the maximum size of the bootloader on the Mega2560 is 8K bytes (see datasheet, page 330).
The idea that a chip with 256 Kb of program memory would have a 103 Kb bootloader is, if I may say, laughable.
Well, size of the .hex file will always be somewhat more than twice the size of the actual code... 20k is a reasonable .hex file size for an 8k bootloader.
hello, I need the strech by bluetooth shit, and I get it, but I must be attentive to press the reset arduino-one.
when bleutooch connects, there squeezed reset and this loads the strech.
I tried to use the wacthdog, when this is connected to a reset,
but I can not load the strech, someone has done something similar to what
comment?
#include <avr/wdt.h> int led = 9; // the pin that the LED is attached to int brightness = 0; // how bright the LED is int fadeAmount = 5; // how many points to fade the LED by void setup () { Serial.begin (115200); pinMode(led, OUTPUT); // Serial.println ("Restarted."); while(!Serial){ ; } wdt_reset (); wdt_disable(); } // end of setup void loop () {); } // end of loop
Nick, re: Regarding " Tested on the Mega2560 board once I replaced the bootloader with this working one:"
Do you know if this is the bootloader in the Arduino 1.04 IDE release?
I've been battling dropped ethernet connection on my webserver application for weeks. It occurs anywhere from hours to days. The application continues to run, but over time, a client will not be able to connect. I have a Mega2560 and want to implement watchdog timer based on your example. The code verifies, but I haven't uploaded it yet to test based on the comments on this post (the need to replace the bootloader)
Is your recommendation of the bootloader you specified still valid? Any help is appreciated.
Thanks, Rich
There seem to be two booloader files shipping with 1.04 namely:
// File = Mega2560-prod-firmware-2011-06-29.hex // Loader start: 3E000, length: 8192 // MD5 sum = 1E 35 14 08 1F 65 7F 8C 96 50 69 9F 19 1E 3D F0 // File = stk500boot_v2_mega2560.hex // Loader start: 3E000, length: 8192 // MD5 sum = D9 E6 6B 4E D1 A6 11 2C 61 8F 9B D5 5D 24 E2 13
However neither has the MD5 sum of the one that I found to work namely:
// File = stk500boot_v2_mega2560_fixes_watchdog_problem.hex // Loader start: 3E000, length: 8192 // MD5 sum = 8A F4 7A 29 43 A0 D8 7C DB ED 09 A3 8F 40 24 1E
I would still recommend the "fixed" one from: | https://forum.arduino.cc/t/watchdog-in-arduino-library-or-at-least-support-by-bootloader/126068?page=2 | CC-MAIN-2022-05 | refinedweb | 1,740 | 71.95 |
Hi, All.
I've put together a plugin for interacting with a SimpleNote account (simplenoteapp.com/).
username:[email protected]
password:mypassword
working directory:c:\\temp
editor extension: txt
It's available on bitbucket at bitbucket.org/stevecooperorg/simplenote/src
Enjoy.
Nice!
Hi, I'm super keen to get this plugin working, but havent had much luck.
I'm not getting any errors in the console when I load sublime, but the 'List Simplenotes' option under tools is greyed out. I've formatted/located the api-key file correctly...
Are there some OS/python/simplenote account requirements that I'm missing? Perhaps I should try hardcoding my account details, or using the first version from the repository?
Any help would be greatly appreciated I am a massive fan of both Sublime and simplenote.
EDIT: seems to be based around python being unable to import the json module, ie:
import jsonImportError: No module named json
Odd though - im runnng python 2.6.4; and running test_json.py comes back positive
The json module only appeared in Python 2.6, which means you'll need the beta version (sublimetext.com/beta) of Sublime Text for it to work: Sublime Text 1.3 is still using Python 2.5.
Ah, of course. Thanks so much for your help, it works great!
@ulterior -- glad you got it working. Let me know how it goes.
@jps -- thanks for helping, Jon!
Any way to get this working from behind a corporate proxy?
By the way, the plugin doesn't handle the missing configuration file very well.
Otherwise, I'm curious to see this working...
@jbjornson -- I've updated the plugin so that there is now a new command in the package menu; you can now click through "Tools | Packages | Simplenote | Edit Simplenote Config File" and it will open the config file. If that doesn't yet exist, it will be created and it's up to you to fill in the username, password, and working directory. After changes, you'll need to restart ST to have them take effect. Hopefully that makes it easier to get the file in the right place and in the right format.
What's happening on your corporate proxy? I don't think there's anything going on except standard web traffic, albeit over HTTPS. The URLs used all look something like
So you may want to make sure traffic to this site, and over HTTPS, is allowed.
I did eventually get the config file in the right location but what I meant before is that the error handling wasn't very graceful when the config file isn't there. I'm sure the config file editor will be helpful when setting up the plugin. Thanks.
The corporate proxy requires that all internet traffic going outside of the intranet is directed through a proxy that is authenticated via NTLM. The connection to to the http/https site has to be done through the proxy. All traffic to the "outside world" is blocked unless it goes through the proxy. I guess there should be some way to define the proxy url and port for the connection that you are making to the simple-note api on...
@jbjornson;
I'm not sure how to proceed with this, really -- this is outside my experience and I don't have a proxy to try it with. If you wanted to try to write a patch, I could suggest a few places in the code you'd want to edit. Here's what I got so far;
in simplenote.py, changeset 2 (3b7dacd23edc);
on line 1, start using urllib2, which supports proxies;
-from urllib import urlopen
+from urllib2 import urlopen
line 385; add proxy option loading code like;
if 'proxy url' in d:
self.proxy= d'proxy url']
which will add those properties to the setup object. Then at the end of Load(), if the proxy was specified, add your authentication code;
from;
#)
I've not tried this at all, but I thought I'd dump all the thinking I've done here. Let me know how it goes; if you do get it to work, post the patch here and I'll update.
You can also try this, although I don't know how it works with authentication:
import os
os.environ'https_proxy'] = ''
os.environ'http_proxy'] = ''
Hi,
is anybody still using this plugin with a recent version?
I've put all files in a directory "simplenote" into the packages folder. Sublime can't parse the Default.sublime-keymap:1:1
I guess the syntax for key bindings has changed at some point!?
But even without key bindings shouldn't there be any simplenote entries in the command palette?
From the age of the posts & the repository (2010 and earlier), my guess is that this plugin was written for Sublime Text 1.
I don't know what all the cool kinds are doing these days, but I'm warming up to the idea of using SimpleNote to sync notes across devices (rather than vanilla Dropbox). But I am loathe to write in anything other than Sublime. Am I the only one in this boat? Or are there other people interested in ressurecting this plugin? (I'm not offering, incidentally )
Nice thing. Thanks! | https://forum.sublimetext.com/t/simplenote-plugin/915/1 | CC-MAIN-2017-43 | refinedweb | 876 | 73.27 |
20 September 2012 11:36 [Source: ICIS news]
SINGAPORE (ICIS)--?xml:namespace>
India’s domestic toluene prices rose by Indian rupees (Rs) 2.00/kg ($0.04/kg) from the previous day to Rs84.00-85.00/kg ex-tank on 20 September, while Brent crude futures fell to $107/bbl from $116/bbl in a span of three days, according to ICIS data.
“The market is going crazy, it is just chaotic. People having cargoes on their hands are just pushing the prices up very, very high,” an India-based importer of toluene said.
While prices in the regional FOB
Normal inventory levels of toluene cargoes at the Kandla ports are around 15,000 tonnes, while the current inventory level is estimated to be only 3,000-3,500 tonnes, said the source.
However, market participants said that this situation was temporary as another 6,000-8,000 tonnes of toluene are expected to reach
($1 = Rs53 | http://www.icis.com/Articles/2012/09/20/9597083/Indias-toluene-prices-rise-further-on-critical-cargoes.html | CC-MAIN-2014-35 | refinedweb | 158 | 62.78 |
When I run locally.
for this same input 1534236469 I am getting 0 but when its run on leetCode it says its getting output as 9646324351
def reverse(self, x): max = 2147483647 k=[] val = 0 reminder = mod = abs(x) while(mod >= 1 or reminder >= 1): mod = reminder % 10 reminder = int(reminder / 10) if(mod != reminder): k.append(mod) val = ''.join(str(i) for i in k) if(x < 0): val = "-"+str(val) if(val == ''): val = 0 m = int(val) if(m > max): print(int(0)) else: print(m) return m
You're probably getting 9646324351 locally as well. Ignore your debugging prints, the true output and the only thing that matters is the return value.
I got same issue with you.
Input: 1534236469 Output: 9646324351 Expected: 0
then i checked the spoilers with this line:
Did you notice that the reversed integer might overflow? Assume the input is a 32-bit integer, then the reverse of 1000000003 overflows. How should you handle such cases?
then AC
class Solution: # @param {integer} x # @return {integer} def reverse(self, x): flag = False if x < 0: flag = True elif x == 0: return 0 min = -2147483648 max = 2147483647 s = str(abs(x))[::-1] for i in range(len(s)): if s[i] != '0': r = 0-int(s[i::]) if flag == True else int(s[i::]) if r > max or r < min: return 0 else: return r | https://discuss.leetcode.com/topic/20465/my-solution-not-getting-accepted-for-reverse-integer | CC-MAIN-2018-05 | refinedweb | 231 | 68.1 |
Hi! I am getting this Error: expected primary expression and ; before else on line 77. For the life of me I can't see it. Can anyone please help me out?
Code:/* IF Write a program that will calculate a customer's purchase amount for movie tickets. The user should be prompted for the amount of adult movie tickets they would like to purchase, the amount of child movie tickets that they would like to purchase, and whether or not they have a discount coupon. If the user has a discount coupon, determine if it is for an adult movie ticket ("adult") or for child movie ticket ("child"). Adult movie tickets cost 10.50 each, while child movie tickets cost 5.00 each. The discount coupon (if any) should be subtracted from the purchase amount. Calculate and display the overall purchase amount with two digits after the decimal point. */ #include <iostream> #include <iomanip> #include <string> using namespace std; int main() { cout << fixed << setprecision(0) << setw(3) << endl; float numAdtTix, numChdTix, TotPurchase, DiscountCoup = 0; string haveDiscount, DiscountType; cout << setprecision(2) << fixed << endl; //get the number of adult movie tickets cout << "Enter the number of adult movie tickets that are being purchased: "; cin >> numAdtTix; //get the number of children's movie tickets cout << "Enter the amount of child tickets that are being purchased: "; cin >> numChdTix; //ask if the user has a discount coupon //if the user has a discount coupon // ask the user for the discount coupon type // if the discount coupon is "adult" // discount coupon is 10.50 // else // if the discount coupon is "child" // discount coupon is 5.00 // else // error message for bad discout type // endif // endif //endif cout << "Do you have a discount coupon (Y for yes, N for no)? "; cin >> haveDiscount; if ( haveDiscount == "Y" ) { haveDiscount = "Y"; cout << "Is this discount for an adult or child's ticket (A for adult or C for child)? "; cin >> DiscountType; } if ( DiscountType == "A" ) { DiscountCoup = 10.50; cout << "Discount: " << DiscountCoup << endl; } else { if ( DiscountType == "C" ) { DiscountCoup = 5.00; cout << "Discount: " << DiscountCoup << endl; } else { cout << DiscountCoup << "is an invalid coupon type" << endl; } } else { ( haveDiscount == "N" ) haveDiscount = "N"; cout << "Total Purchase: " << TotPurchase << endl; } //calculate the purchase amount TotPurchase = (numAdtTix * 10.50) + (numChdTix * 5.00) - DiscountCoup; return 0; } | http://cboard.cprogramming.com/cplusplus-programming/150749-error-expected-primary-expression-;-before-else.html | CC-MAIN-2016-30 | refinedweb | 373 | 69.31 |
The loop vectorizer optimizes loops containing conditional memory accesses by generating masked load and store intrinsics.
This decision is target dependent.
I already submitted the codegen changes for the intrinsics.
The loop vectorizer optimizes loops containing conditional memory accesses by generating masked load and store intrinsics.
This decision is target dependent.
I already submitted the codegen changes for the intrinsics.
Hi Elena,
Thank you for working on this.
+ bool canPredicateStore(Type *DataType, Value *Ptr) {
+ return TTI->isLegalPredicatedStore(DataType, isConsecutivePtr(Ptr));
+ }
+ bool canPredicateLoad(Type *DataType, Value *Ptr) {
+ return TTI->isLegalPredicatedLoad(DataType, isConsecutivePtr(Ptr));
+ }
+ bool setMaskedOp(const Instruction* I) {
+ return (MaskedOp.find(I) != MaskedOp.end());
+ }
private:
Can you please document these functions? The name setMaskedOp is confusing and Doxygen style comments could be useful here.
Thanks,
Nadav
Hi Elena,
Please see a question from me in inline comments. And thanks for doing this!
Michael
Thank you all for reviewing this. I addressed all comments and applying the updated patch.
+ / Returns true if vector representation of the instruction \p I
+ / requires mask.
+ bool toBuildMaskedVectorInst(const Instruction* I) {
+ return (MaskedOp.count(I) != 0);
+ }
+ void SI();
Maybe a better name would be "isMaskRequired"?
To Michael:
Some tests fail in this case, I tried. Because NumberOfStoresToPredicate allows you to predicated some stores without masking.
I wanted to keep this knob as is.
I also do not check mayThrow() because it is for calls only.
To Nadav:
Maybe a better name would be "isMaskRequired"?
No problem
Michael is right about potentially trapping constant expression. I addressed this issue and also added a test for it.
Some changes in the patch should go in separately. For example:
+ return false;
} }
This is a whitespace fix. Just commit it as is.
+ assert(MaxVectorSize <= 64 && "Did not expect to pack so many elements"
" into one vector!");
Also the assert change can go in.
+ void SI();
private:
What is SI() ??
I would also split the TTI changes into a separate patch. It can probably go in already.
Hi Elena,
Thank you for checking-in the first (small) part of the changes.
The code looks good to me except some minor nits (see inline), but I'd wait for a LGTM from Nadav or Arnold. | https://reviews.llvm.org/D6527?id= | CC-MAIN-2019-47 | refinedweb | 362 | 52.76 |
This page is a snapshot from the LWG issues list, see the Library Active Issues List for more information and the meaning of New status.
Section: 20.10.6 [ptr.align] Status: New Submitter: Melissa Mears Opened: 2014-08-06 Last modified: 2014-11-03
Priority: 3
View all other issues in [ptr.align].
View all issues with New status.
Discussion:
The specification of std::align does not appear to specify what happens when the value of the size parameter is 0. (The question of what happens when alignment is 0 is mentioned in another Defect Report, 2377; it would change the behavior to be undefined rather than potentially implementation-defined.)The case of size being 0 is interesting because the result is ambiguous. Consider the following code's output:
#include <cstdio> #include <memory> int main() { alignas(8) char buffer[8]; void *ptr = &buffer[1]; std::size_t space = sizeof(buffer) - sizeof(char[1]); void *result = std::align(8, 0, ptr, space); std::printf("%d %td\n", !!result, result ? (static_cast<char*>(result) - buffer) : std::ptrdiff_t(-1)); }
There are four straightforward answers as to what the behavior of std::align with size 0 should be:
The behavior is undefined because the size is invalid.
The behavior is implementation-defined. This seems to be the status quo, with current implementations using #3.
Act the same as size == 1, except that if size == 1 would fail but would be defined and succeed if space were exactly 1 larger, the result is a pointer to the byte past the end of the ptr buffer. That is, the "aligned" version of a 0-byte object can be one past the end of an allocation. Such pointers are, of course, valid when not dereferenced (and a "0-byte object" shouldn't be), but whether that is desired is not specified in the Standard's definition of std::align, it appears. The output of the code sample is "1 8" in this case.
Act the same as size == 1; this means that returning "one past the end" is not a possible result. In this case, the code sample's output is "0 -1".
The two compilers I could get working with std::align, Visual Studio 2013 and Clang 3.4, implement #3. (Change %td to %Id on Visual Studio 2013 and earlier. 2014 and later will have %td.)
Proposed resolution: | https://cplusplus.github.io/LWG/issue2421 | CC-MAIN-2019-51 | refinedweb | 392 | 61.97 |
Walkthrough: Embedding a JavaScript File as a Resource in an Assembly
In this walkthrough, you will include a JavaScript file as an embedded resource in an assembly. You embed a JavaScript file when you have a client-script component that must be distributed with an assembly that you have created. For example, you might create a custom ASP.NET server control that uses JavaScript files to implement AJAX functionality for ASP.NET. You can embed the JavaScript files in the assembly, and they can then be referenced from a Web application that registers the assembly.
To begin, you will create a file that contains the JavaScript code that you want to embed in the assembly.
To embed a client script file in an assembly
In Visual Studio, create a new class library project named SampleControl.
Add references to the System.Web, System.Drawing, and System.Web.Extensions assemblies to the project.
Add a new JScript file named UpdatePanelAnimation.js to the project.
Add the following code to the UpdatePanelAnimation.js file:
The code contains a JavaScript function that temporarily displays a border based on the provided color around an UpdatePanel control.
In the Properties window for the UpdatePanelAnimation.js file, set Build Action to Embedded Resource.
Add a class file named CustomControl to the project.
Replace any code in the CustomControl file with the following code:); } } } }
This class contains properties for customizing the border that is displayed around the UpdatePanel control. The code also registers JavaScript code to use in a Web page. The code creates a handler for the load event of the Sys.Application object. The animate function in the UpdatePanelAnimation.js file is called when a partial-page update is processed.
Add the following line to the AssemblyInfo file.
The WebResource definition must include the default namespace of the assembly and the name of the .js file.
Build the project.
When compilation finishes, you will have an assembly named SampleControl.dll. The JavaScript code in the UpdatePanelAnimation.js file is embedded in this assembly as a resource.
You can now reference the embedded script file in a Web application.
To reference the embedded script file
In Visual Studio, create a new AJAX-enabled Web site.
Create a Bin folder in the root directory of the Web site.
Copy SampleControl.dll from the Bin\Debug or Bin\Release directory of the class library project to the Bin folder of the Web site.
Replace the code in the Default.aspx file with the following code:
<%@>
This code includes a ScriptReference control that references the assembly and the name of the .js file that you created in the previous procedure. The name of the .js file includes a prefix that references the default namespace of the assembly.
Run the project, and in the page, click dates in the calendar.
Every time that you click a date in the calendar, you see a green border around the UpdatePanel control.
This walkthrough showed you how to embed a JavaScript file as a resource in an assembly. The embedded script file can be accessed in a Web application that contains the assembly.
The next step is to learn how to embed localized resources in an assembly for use in client script. For more information, see Walkthrough: Embedding Localized Resources for a JavaScript File. | http://msdn.microsoft.com/en-us/library/bb398930 | CC-MAIN-2014-10 | refinedweb | 550 | 58.99 |
Hello i need help is it possible to make a program where you choose gregorian and the output is julian and vise versa in about 4 lines?
please i need help i dont know how to start it
I have this but its wrong and i dont know exactly what the output is because it has a problem.
Code :
package calendar; public static void main(String[] args) { public class Calendar private static double julianDate(int year, int month, int day) { if (month <= 2) { year -= 1; month += 12; } double A = Math.floor(year / 100.0); double B = 2 - A + Math.floor(A / 4.0); double JD = Math.floor(365.25 * (year + 4716)) + Math.floor(30.6001 * (month + 1)) + day + B - 1524.5; return JD; } } | http://www.javaprogrammingforums.com/%20java-theory-questions/22797-gregorian-julian-vise-versa-printingthethread.html | CC-MAIN-2015-27 | refinedweb | 123 | 74.69 |
regexec()
Compare a string with a compiled regular expression
Synopsis:
#include <regex.h> int regexec( const regex_t * preg, const char * string, size_t nmatch, regmatch_t * pmatch, int eflags );
Since:
BlackBerry 10.0.0
Arguments:
- preg
- A pointer to the regex_t object for the regular expression that you want to execute. You must have compiled the expression by calling regcomp().
- string
- The string that you want to match against the regular expression.
- nmatch
- The maximum number of matches to record in pmatch.
- pmatch
- An array of regmatch_t objects where the function can record the matches; see below.
- eflags
- Execution parameters to regexec(). For example, you may need to call regexec() multiple times if the line you're processing is too large to fit into string. The eflags argument is the bitwise inclusive OR of zero or more of the following flags:
- REG_NOTBOL — the string argument doesn't point to the beginning of a line.
- REG_NOTEOL — the end of string isn't the end of a line.
Library:
libc
Use the -l c option to qcc to link against this library. This library is usually included automatically.
This function is in libc.a, but not in libc.so (in order to save space).
Description::
- rm_so
- The byte offset from the beginning of the string to the beginning of the matched substring.
- rm_eo
- One greater than the offset from the beginning of the string to the end of the matched substring.
The offsets in pmatch[0] identify the substring corresponding to the entire expression, while those in pmatch[1...nmatch] identify up to the first nmatch subexpressions. Unused elements of the pmatch array are set to -1.
You can disable the recording of substrings by either specifying REG_NOSUB in regcomp(), or by setting nmatch to zero.
Returns:
- 0
- The string argument matches preg.
- <>0
- A match wasn't found, or an error occurred (use regerror() to get an explanation).
Classification:
Contributing author:
Henry Spencer. For license information, see the Third Party License Terms List at .
Last modified: 2014-06-24
Got questions about leaving a comment? Get answers from our Disqus FAQ.comments powered by Disqus | http://developer.blackberry.com/native/reference/core/com.qnx.doc.neutrino.lib_ref/topic/r/regexec.html | CC-MAIN-2018-17 | refinedweb | 352 | 68.16 |
»
Java in General
Author
Cast Object back to its original type
Andy Nimmo
Greenhorn
Joined: Mar 30, 2003
Posts: 14
posted
Feb 19, 2004 09:01:00
0
I have a method to which I expect have one of two possible Classes sent to at any given time.
I could just create two forms of this method however this is something I come across on a regular basis and have always wanted to know if it is possible to do this.
I have a class which I cast to type Object then pass it to a method call. In the method body I want to cast this object back to it's previous type.
For example:
ContactsList -> Object -> saveToFile( theObject )
theObject -> ContactsList
Hope this makes sense.
Can anyone help? I've had a look at
Java
Reflections but I'm having a bad day and I'm unable to decipher what's in there and apply it to my situation.
All help is greatly appreciated.
Cheers,
Andy.
*****************************************************
Here I Stand, Alone, Left To Face The Coming Darkness
Bring it on!!
*****************************************************
[ February 19, 2004: Message edited by: Andy Nimmo ]
Stan James
(instanceof Sidekick)
Ranch Hand
Joined: Jan 29, 2003
Posts: 8791
posted
Feb 19, 2004 10:20:00
0
One way to solve this is to make your two possible classes "extend" the same base class or "implement" a common interface. Then if you can call methods defined on the base class to work on them, you won't ever have to know which one you get.
public interface Pet { public void speak(); } public class Dog implements Pet { public void speak() { System.out.println("Arf!"); } } public class Cat implements Pet { public void speak() { System.out.println("Meow!"); } } public class MyClass // ran out of cute names { public void makePetSpeak( Pet pet ) { pet.speak(); } }
Now if somebody calls MyClass with either a Cat or Dog we'll get the correct sound. Holler if that didn't make sense.
But what if you really can't make Cat & Dog implement the same interface? Maybe you bought one from Acme and the other from Best? You do something more like this:
import acme.Dog; import best.Cat; public class MyClass { public void speak( Object pet ) // <- finally, exactly what you asked about! { if ( pet instanceOf( Dog ) ) { Dog dog = (Dog)pet; dog.bark(); } if ( pet instanceOf( Cat ) ) { ((Cat)pet).meow(); } } }
We have to find out exactly what we're dealing with, cast it to the real class and call a method. Note that acme and best didn't agree on having a speak() method. With Dog I made a local variable for clarity. With Cat I cast and called the method all in one line. If that's too many parens in one line for your eyes, do it the other way.
For your original question, you were just missing instanceof. But the polymorphic solution using extends or implements makes life a lot easier!
[ February 19,
Wayne L Johnson
Ranch Hand
Joined: Sep 03, 2003
Posts: 399
posted
Feb 19, 2004 10:24:00
0
You can use the "instanceof" operator to determine the actual class of an object instance. For example, if your method was expecting either a "String" or a "java.util.Date", you could do something like:
... public void myMethod(Object obj) { if (obj instanceof String) { String strVal = (String)obj; // do whatever ... } else if (obj instanceof java.util.Date) { java.util.Date dateVal = (java.util.Date)obj; // do something else ... } else { throw new IllegalArgumentException("..."); } }
However it's much better to simply use overloading:
... public void myMethod(String strObj) { // do what you want with STRING ... } ... public void myMethod(java.util.Date dateObj) { // do what you want with DATE ... }
If there is some common code, you can factor that out into a third method, make it "private", and invoke it from the two other methods.
Later if you need to add a third object type, simply create another method.
I think this way is easier to read, easier to maintain, and allows you to catch things at compile time rather than at run time. And there is no unnecessary class casting to do.
Andy Nimmo
Greenhorn
Joined: Mar 30, 2003
Posts: 14
posted
Feb 24, 2004 07:23:00
0
Thanks guys.
Both are kind of what I was expecting although I'd overlooked the instanceOf method call.
The reason I asked this question wasn't simply down to my example situation as I have in the past created some generic methods which could accept a number of different objects and act accordingly. I just hate long windedness and do my best to make things clear aond concise, sometimes to overload a method can result in a large quantity of highly similar code which cant be refactored into a second method call without resulting in yet more code.
I have what I need now.
Cheers again,
Andy.
Sajee Joseph
Ranch Hand
Joined: Jan 17, 2001
Posts: 200
posted
Feb 24, 2004 08:18:00
0
Hello,
Please remember that there is a catch in instanceOf operator.
This operator returs true when u use it against a parent class too.
Consider the following code:
class AUpper
{
}
class ALower extends AUpper
{
}
class TestInstanceOf
{
public static void main(
String
[] a)
{
ALower n = new ALower();
if(n instanceof AUpper)
System.out.println("AUpper");
if(n instanceof ALower)
System.out.println("ALower");
}
}
This will print the following:
AUpper
ALower
Cheers
_saj
sever oon
Ranch Hand
Joined: Feb 08, 2004
Posts: 268
posted
Feb 24, 2004 16:00:00
0
Augh! Don't use instanceof!!!
Or at least, before you do, I highly recommend you read
this article
over at
Object Mentor
...if the entire thing is too long-winded for ya, do a search for "dynamic polymorphism" and just read that page or two.
The problem you are grappling with is a common one, but to give a proper approach I'd have to know more about it. I understand you have some method, foo(), to which you are passing two different types. Does this method treat both of these types in the exact same way, or does it treat them differently? This makes a world of difference.
If you execute the same logic on both types, then you should think about the applicability of that logic to other types as well. If that logic can be applied to all objects, no matter what type they are, then go ahead and have it take any Object. (Example: if it returns "hash code:" followed by the hash code of the object, this indeed can be run on any object at all.)
If it only makes sense to pass this method certain kinds of objects, then maybe you want to think about creating an interface for those kinds of objects and have them all implement it. For example, I wrote a system that could chain together certain objects in any kind of order. All of those objects implemented the Chainable interface. At first, this interface didn't specify any behaviors at all, it was just a tagging interface that allowed me to treat all chainable types as...well, Chainable. Later, when we added other requirements to the system it became necessary to have chainable objects all implement a few simple behaviors, so I added those behaviors to the interface and implemented them throughout.
Great--but what if the situation you're talking about treats these different types differently? First of all, I'd question the design. If you're treating two different types completely different, is it possible that you should have two completely different methods? However, if conceptually you are doing the same thing, it's just the process that differs between the two types, then it's ok.
Example time. If you have a method start() that starts a Car object (S.o.println("vroom")) and an Altercation object (S.o.println("You're stupid")), these are two totally unrelated things that happen to use the same verb. They should have two completely different methods with less ambiguous names than "start". On the other hand, if you're trying to calculate the perimeter of a Circle object (return 2*Math.PI*r) and a Rectangle object (return 2*width + 2*height), this is conceptually the same thing.
However, in the latter case, with the rectangle and the circle, you should implement overloaded methods, calcPerimeter(Circle) and calcPerimeter(Rectangle). This is totally appropriate.
Another thing to consider...whenever I find myself implementing overloaded methods, I always check to make sure I've put this functionality in the correct place. In the example above of calculating the perimeter of shapes, I might run into this decision while implementing a Shape class (a final class that is a toolkit for different shapes) with a static method calcPerimeter(). I would stop, though, and wonder if maybe instead I should do away with the toolkit approach--after all, that's the procedural approach, not object-oriented. Instead, I'd prefer to make Shape an abstract base class with an abstract method, getPerimeter(), and have Circle and Rectangle extend it, each providing their own implementation for the method.
If you go this way, then you should consider whether Shape should be an abstract class or an interface. In this application, you might want to mandate that all Shapes track the number of sides they have (a circle one, a rectangle 4, etc). This code could easily be provided at the base class level b/c it's the same for every possible shape:
public class Shape { private int numSides; public Shape( int numSides ) { setNumSides( numSides ); } public int getNumSides() { return numSides; } protected void setNumSides( int ns ) { numSides = ns; } public abstract getPerimeter(); }
Or, you might prefer to make Shape an interface if being a shape isn't the "main job" of the subclasses that would extend it. (For example, if it's more important to the design of your app that they extend another class, like DrawableWidget, that provides a lot of concrete functionality, you'd want to extend that and implement a Shape interface instead.)
For a good discussion on abstraction and stability, check out
this article
over at Object Mentor too.
sev
I agree. Here's the link:
subject: Cast Object back to its original type
Similar Threads
Need to resolve ClassCastException
contains() & equals()!
a question on parameter passing
generic casting with object.getClass() ?
Generics compendium
All times are in JavaRanch time: GMT-6 in summer, GMT-7 in winter
JForum
|
Paul Wheaton | http://www.coderanch.com/t/372951/java/java/Cast-Object-original-type | CC-MAIN-2014-52 | refinedweb | 1,746 | 62.88 |
in reply to
Re^4: Behold! The power of recursion.
in thread Behold! The power of recursion.
Now one problem of course is that perl does not do this type of optimization :)
sub factorial {
my ($n, $accumulator) = @_;
$accumulator ||= 1;
if ($n == 0) {
return $accumulator;
}
@_ = ($n - 1, $n * $accumulator);
goto &factorial;
}
[download]
sub rec_max {
return () unless @_;
my ($h1, $h2) = (shift, shift);
if (@_) {
unshift @_, ($h1 > $h2) ? $h1 : $h2;
&rec_max; # Try it with and without goto here
}
elsif (defined $h2) {
return ($h1 > $h2) ? $h1 : $h2;
}
}
use List::Util 'shuffle';
print rec_max shuffle(1..2000,1500..3000,3500..8000);
[download]
Indeed!
Benchmark results:
Rate uses_goto normal uses_amp
uses_goto 94719/s -- -10% -14%
normal 105040/s 11% -- -4%
uses_amp 109823/s 16% 5% --
Rate uses_goto normal uses_amp
uses_goto 94448/s -- -8% -14%
normal 103141/s 9% -- -7%
uses_amp 110408/s 17% 7% --
Rate uses_goto normal uses_amp
uses_goto 94148/s -- -11% -14%
normal 105616/s 12% -- -4%
uses_amp 109985/s 17% 4% --
[download]
Benchmark code:
Manually optimize the tail call away yourself and get a very nice improvement in speed. I switched back to your node's parent's code because it was recursive and yours wasn't. I haven't decided where exactly recursion is so utterly losing
Rate normal goto amp unrolled
normal 7.91/s -- -75% -91% -95%
goto 31.9/s 304% -- -63% -81%
amp 85.5/s 981% 168% -- -50%
unrolled 169/s 2044% 431% 98% --
[download]
⠤⠤ ⠙⠊⠕⠞⠁⠇⠑⠧⠊
Yes
No
Other opinion (please explain)
Results (99 votes),
past polls | http://www.perlmonks.org/?node_id=400465 | CC-MAIN-2015-40 | refinedweb | 252 | 63.7 |
IPython
Assorted notes on using the ipython interactive shell. Official documentation is at.
See also: General programming, Python notes pages.
Basics
Pylab - Starting ipython with the
--pylaboption imports numpy and matplotlib libraries into the workspace.
Inline help - Following any object of function name with a "?" gives info or documentation about the item.
Shell access - to run a command in the system shell prefix it with an exclamation point:
!ping
ipdb - ipython has an enhanced debugger (not sure if this is enabled by default :!:)
Magic commands
IPython "magic" commands operate only within IPython, and are prefaced by %. If the flag %automagic is set, then magic commands can be called without the %. (%automagic is on by default.). These commands include:
- Functions that work with code: %run, %edit, %save, %macro, %recall, etc.
- Functions which affect the shell: %colors, %xmode, %autoindent, etc.
- Other functions such as %reset, %timeit or %paste.
- To see all the available magic functions, call %lsmagic.
- Thislists all the core magic functions.`
%run
Run any python script and load all of its data directly into the interactive namespace. Since the file is re-read from disk each time, changes you make to it are reflected immediately (unlike imported modules, which have to be specifically reloaded). %run has special flags for timing the execution of your scripts (-t), or for running them under the control of either Python’s pdb debugger (-d) or profiler (-p).
You can step through a program from the beginning by calling
%run -d
theprogram.py
%who, %whos
Print all interactive variables, with minimal (%who) or extensive (%whos) formatting. If any arguments are given, only variables whose type matches one of these are printed. For example:
#This will list functions and strings, excluding all other types of variables %who function str
%edit
Gives a reasonable approximation of multiline editing, by invoking your favorite editor on the spot. IPython will execute the code you type in there as if it were typed interactively.
%debug, %pdb, %prun
After an exception occurs, you can call
%debug to jump into the Python
debugger (pdb) and examine the problem. Alternatively, if you call
%pdb,
IPython will automatically start the debugger on any uncaught exception.
%prun will run a statement through the python code profiler.
The Qt console
IPython has a ligthweight, Qt enhanced terminal that allows some extra functionality.
- The Debian package: ipython-qtconsole.
- Start with: ipython qtconsole
- arguments can be passed to this start statement (
--pylab)
- type
%guirefto see a quick introduction of its main features.
- Feature description
Features
Multi-line editing
To input and edit code on multiple lines press Ctrl-Enter after the first line. This puts the console in multiline editing mode and additional lines can be added and edited. To run the lines of code append a blank line and hit Enter or use Shift-Enter.
%loadpy
The
%loadpy magic takes any python script (must end in
.py), and
pastes its contents as your next input, so you can edit it before
executing.
Inline plotting
Matplotlib plots can now be displayed right in the terminal window. To
make this happen all the time, start the qtconsole with:
--pylab=inlineMatplotlib figures can also be individually embedded in
the qtconsole workspace using something like
display(gcf())
Saving "notebooks"
Qtconsole activity can be saved in HTML or XHTML, with inline figures in PNG or SVG format. To switch the inline figure format to use SVG during an active session, do:
In [10]: %config InlineBackend.figure_format = 'svg' | https://earthscinotebook.readthedocs.io/en/latest/computing/ipython/ | CC-MAIN-2019-09 | refinedweb | 575 | 56.35 |
One of the reasons why Elasticsearch has become so widely popular is due to how well it scales from just a small cluster with a few nodes to a large cluster with hundreds of nodes. At its heart is the cluster coordination subsystem. Elasticsearch version 7 contains a new cluster coordination subsystem that offers many benefits over earlier versions. This article covers the improvements to this subsystem in version 7. It describes how to use the new subsystem, how the changes affect upgrades from version 6, and how these improvements prevent you from inadvertently putting your data at risk. It concludes with a taste of the theory describing how the new subsystem works.
What is cluster coordination?
An Elasticsearch cluster can perform many tasks that require a number of nodes to work together. For example, every search must be routed to all the right shards to ensure that its results are accurate. Every replica must be updated when you index or delete some documents. Every client request must be forwarded from the node that receives it to the nodes that can handle it. The nodes each have their own overview of the cluster so that they can perform searches, indexing, and other coordinated activities. This overview is known as the cluster state. The cluster state determines things like the mappings and settings for each index, the shards that are allocated to each node, and the shard copies that are in-sync. It is very important to keep this information consistent across the cluster. Many recent features, including sequence-number based replication and cross-cluster replication, work correctly only because they can rely on the consistency of the cluster state.
The coordination subsystem works by choosing a particular node to be the master of the cluster. This elected master node makes sure that all nodes in its cluster receive updates to the cluster state. This is harder than it might first sound, because distributed systems like Elasticsearch must be prepared to deal with many strange situations. Nodes sometimes run slowly, pause for a garbage collection, or suddenly lose power. Networks suffer from partitions, packet loss, periods of high latency, or may deliver messages in a different order from the order in which they were sent. There may be more than one such problem at once, and they may occur intermittently. Despite all this, the cluster coordination subsystem must be able to guarantee that every node has a consistent view of the cluster state.
Importantly, Elasticsearch must be resilient to the failures of individual nodes. It achieves this resilience by considering cluster-state updates to be successful after a quorum of nodes have accepted them. A quorum is a carefully-chosen subset of the master-eligible nodes in a cluster. The advantage of requiring only a subset of the nodes to respond is that some of the nodes can fail without affecting the cluster's availability. Quorums must be carefully chosen so the cluster cannot elect two independent masters which make inconsistent decisions, ultimately leading to data loss.
Typically we recommend that clusters have three master-eligible nodes so that if one of the nodes fails then the other two can still safely form a quorum and make progress. If a cluster has fewer than three master-eligible nodes, then it cannot safely tolerate the loss of any of them. Conversely if a cluster has many more than three master-eligible nodes, then elections and cluster state updates can take longer.
Evolution or revolution?
Elasticsearch versions 6.x and earlier use a cluster coordination subsystem called Zen Discovery. This subsystem has evolved and matured over the years, and successfully powers clusters large and small. However there are some improvements we wanted to make which required some more fundamental changes to how it works.
Zen Discovery lets the user choose how many master-eligible nodes form a quorum using the
discovery.zen.minimum_master_nodes setting. It is vitally important to configure this setting correctly on every node, and to update it correctly as the cluster scales dynamically. It is not possible for the system to detect if a user has misconfigured this setting, and in practice it is very easy to forget to adjust it after adding or removing nodes. Zen Discovery tries to protect against this kind of misconfiguration by waiting for a few seconds at each master election, and is generally quite conservative with other timeouts too. This means that if the elected master node fails then the cluster is unavailable for at least a crucial few seconds before electing a replacement. If the cluster cannot elect a master then sometimes it can be very hard to understand why.
For Elasticsearch 7.0 we have rethought and rebuilt the cluster coordination subsystem:
- The
minimum_master_nodessetting is removed in favour of allowing Elasticsearch itself to choose which nodes can form a quorum.
- Typical master elections now take well under a second to complete.
- Growing and shrinking clusters becomes safer and easier and leaves much less room to configure the system in a way that risks losing data.
- Nodes log their status much more clearly to help diagnose why they cannot join a cluster or why a master cannot be elected.
As nodes are added or removed, Elasticsearch automatically maintains an optimal level of fault tolerance by updating the cluster's voting configuration. The voting configuration is a set of master-eligible nodes whose votes are counted when making a decision. Typically the voting configuration contains all the master-eligible nodes in the cluster. The quorums are simple majorities of the voting configuration: all cluster state updates need agreement from more than half of the nodes in the voting configuration. Since the system manages the voting configuration, and therefore its quorums, it can avoid any chance of misconfigurations that might lead to data loss even if nodes are added or removed.
If a node cannot discover a master node and cannot win an election itself then, starting in 7.0, Elasticsearch will periodically log a warning message describing its current status in enough detail to help diagnose many common problems.
Additionally, Zen Discovery had a very rare failure mode, recorded on the Elasticsearch Resiliency Status Page as "Repeated network partitions can cause cluster state updates to be lost", which can no longer occur. This item is now marked as resolved.
How do I use this?
If you start some freshly installed Elasticsearch nodes with completely default configurations, then they will automatically seek out other nodes running on the same host and form a cluster after a few seconds. If you start more nodes on the same host, then by default they will discover and join this cluster too. This makes it just as easy to start a multi-node development cluster with Elasticsearch version 7.0 as it is with earlier versions.
This fully-automatic cluster formation mechanism works well on a single host but it is not robust enough to use in production or other distributed environments. There is a risk that the nodes might not discover each other in time and might form two or more independent clusters instead. Starting with version 7.0, if you want to start a brand new cluster that has nodes on more than one host, you must specify the initial set of master-eligible nodes that the cluster should use as voting configuration in its first election. This is known as cluster bootstrapping, and is only required the very first time the cluster forms. Nodes that have already joined a cluster store the voting configuration in their data folders and reuse that after a restart, and freshly started nodes that are joining an existing cluster can receive this information from the cluster's elected master.
You bootstrap a cluster by setting the
cluster.initial_master_nodes setting to the names or IP addresses of the initial set of master-eligible nodes. You can provide this setting on the command line or in the
elasticsearch.yml file of one or more of the master-eligible nodes. You will also need to configure the discovery subsystem so that the nodes know how to find each other.
If
initial_master_nodes is not set then brand-new nodes will start up expecting to be able to discover an existing cluster. If a node cannot find a cluster to join then it will periodically log a warning message indicating
master not discovered yet, this node has not previously joined a bootstrapped (v7+) cluster, and [cluster.initial_master_nodes] is empty on this node
There is no longer any special ceremony required to add new master-eligible nodes to a cluster. Simply configure the new nodes to discover the existing cluster, start them up, and the cluster will safely and automatically adapt its voting configuration when the new nodes join. It is also safe to remove nodes simply by stopping them as long as you do not stop half or more of the master-eligible nodes all at once. If you need to stop half or more of the master-eligible nodes, or you have more complex scaling and orchestration needs, then there is a more targeted scaling procedure which uses an API to adjust the voting configuration directly.
How do I upgrade?
You can upgrade an Elasticsearch cluster from version 6 to version 7 via a rolling upgrade or a full-cluster restart. We recommend a rolling upgrade since this allows you to perform the upgrade node-by-node while the cluster remains available throughout. You must upgrade your version 6 cluster to version 6.7 before performing a rolling upgrade to version 7. A full-cluster restart allows you to upgrade to version 7 from any 6.x version, but it involves shutting down the whole cluster and then starting it up again. In either case there have been many more changes to Elasticsearch between versions 6 and 7 than the improvements to cluster coordination described here. To ensure a smooth upgrade you should always follow the detailed upgrade instructions carefully.
If you perform a rolling upgrade then the cluster bootstrapping happens automatically, based on the number of nodes in the cluster and any existing
minimum_master_nodes setting. This means it is important to ensure this setting is set correctly before starting the upgrade. There is no need to set
initial_master_nodes here since cluster bootstrapping happens automatically when performing this rolling upgrade. The version 7 master-eligible nodes will prefer to vote for version 6.7 nodes in master elections, so you can normally expect a version 6.7 node to be the elected master during the upgrade until you have upgraded every one of the master-eligible nodes.
If you perform a full-cluster restart upgrade, then you must bootstrap the upgraded cluster as described above: before starting the newly-upgraded cluster you must first set
initial_master_nodes to the names or IP addresses of the master-eligible nodes.
In versions 6 and earlier there are some other settings that allow you to configure the behaviour of Zen Discovery in the
discovery.zen.* namespace. Some of these settings no longer have any effect and have been removed. Others have been renamed. If a setting has been renamed then its old name is deprecated in version 7, and you should adjust your configuration to use the new names:
The new cluster coordination subsystem includes a new fault detection mechanism. This means that the Zen Discovery fault detection settings in the
discovery.zen.fd.* namespace no longer have any effect. Most users should use the default fault detection configuration in versions 7 and later, but if you need to make any changes then you can do so using the
cluster.fault_detection.* settings.
Safety first
Versions of Elasticsearch before 7.0 sometimes allowed you to inadvertently perform a sequence of steps that unsafely put the consistency of the cluster at risk. In contrast, versions 7.0 and later will make you fully aware that you might be doing something unsafe and will require confirmation that you really do want to proceed.
For example, an Elasticsearch 7.0 cluster will not automatically recover if half or more of the master-eligible nodes are permanently lost. It is common to have three master-eligible nodes in a cluster, allowing Elasticsearch to tolerate the loss of one of them without downtime. If two of them are permanently lost then the remaining node cannot safely make any further progress.
Versions of Elasticsearch before 7.0 would quietly allow a cluster to recover from this situation. Users could bring their cluster back online by starting up new, empty, master-eligible nodes to replace any number of lost ones. An automated recovery from the permanent loss of half or more of the master-eligible nodes is unsafe, because none of the remaining nodes are certain to have a copy of the latest cluster state. This can lead to data loss. For example, a shard copy may have been removed from the in-sync set. If none of the remaining nodes know this, then that stale shard copy might be allocated as a primary. The most dangerous part of this was that users were completely unaware that this sequence of steps had put their cluster at risk. It might be weeks or months before a user notices any inconsistency.
In Elasticsearch 7.0 and later this kind of unsafe activity is much more restricted. Clusters will prefer to remain unavailable rather than take this kind of risk. In the rare situation where there are no backups, it is still possible to perform this kind of unsafe operation if absolutely necessary. It just takes a few extra steps to confirm that you are aware of the risks and to avoid the chance of doing an unsafe operation by accident.
If you have lost half or more of the master-eligible nodes, then the first thing to try is to bring the lost master-eligible nodes back online. If the nodes' data directories are still intact, then the best thing to do is to start new nodes using these data directories. If this is possible then the cluster will safely form again using the most up-to-date cluster state.
The next thing to try is to restore the cluster from a recent snapshot. This brings the cluster into a known-good state but loses any data written since you took the snapshot. You can then index any missing data again, since you know what the missing time period is. Snapshots are incremental so you can perform them quite frequently. It is not unusual to take a snapshot every 30 minutes to limit the amount of data lost in such a recovery.
If neither of these recovery actions is possible then the last resort is the
elasticsearch-node unsafe recovery tool. This is a command-line tool that a system administrator can run to perform unsafe actions such as electing a stale master from a minority. By making the steps that can break consistency very explicit, Elasticsearch 7.0 eliminates the risk of unintentionally causing data loss through a series of unsafe operations.
How does it work?
If you are familiar with the theory of distributed systems you may recognise cluster coordination as an example of a problem that can be solved using distributed consensus. Distributed consensus was not very widely understood when the development of Elasticsearch started, but the state of the art has advanced significantly in recent years.
Zen Discovery adopted many ideas from distributed consensus algorithms, but did so organically rather than strictly following the model that the theory prescribes. It also has very conservative timeouts, making it sometimes recover very slowly after a failure. The introduction of a new cluster coordination subsystem in 7.0 gave us the opportunity to follow the theoretical model much more closely.
Distributed coordination is known to be a difficult problem to solve correctly. We heavily relied on formal methods to validate our designs up-front, with automated tooling providing strong guarantees in terms of correctness and safety. You can find the formal specifications of Elasticsearch's new cluster coordination algorithm in our public Elasticsearch formal-models repository. The core safety module of the algorithm is simple and concise and there is a direct one-to-one correspondence between the formal model and the production code in the Elasticsearch repository.
If you are acquainted with the family of distributed consensus algorithms that includes Paxos, Raft, Zab and Viewstamped Replication (VR), then the core safety module will look familiar. It models a single rewritable register and uses a notion of a master term that parallels the ballots of Paxos, the terms of Raft and the views of VR. The core safety module and its formal model also covers cluster bootstrapping, persistence across node restarts, and dynamic reconfiguration. All of these features are important to ensure that the system behaves correctly in all circumstances.
Around this theoretically-robust core we built a liveness layer to ensure that, no matter what failures happen to the cluster, once the network is restored and enough nodes are online a master will be elected and will be able to publish cluster state updates. The liveness layer uses a number of state-of-the-art techniques to avoid many common issues. The election scheduler is adaptive, altering its behavior according to network conditions to avoid an excess of contested elections. A Raft-style pre-voting round suppresses unwinnable elections before they start, avoiding disruption by rogue nodes. Lag detection prevents nodes from disrupting the cluster if they fall too far behind the master. Active bidirectional fault detection ensures that the nodes in the cluster can always mutually communicate. Most cluster state updates are published efficiently as small diffs, avoiding the need to copy the whole cluster state from node to node. Leaders that are gracefully terminated will expressly abdicate to a successor of their choosing, reducing downtime during a deliberate failover by avoiding the need for a full election. We developed testing infrastructure to efficiently simulate the effects of pathological disruptions that could last for seconds, minutes, or hours, allowing us to verify that the cluster always recovers quickly once the disruption is resolved.
Why not Raft?
A question we are commonly asked is why we didn't simply "plug in" a standard distributed consensus algorithm such as Raft. There are quite a number of well-known algorithms, each offering different trade-offs. We carefully evaluated and drew inspiration from all the literature we could find. One of our early proofs-of-concepts used a protocol much closer to Raft. We learned from this experience that the changes required to fully integrate it with Elasticsearch turned out to be quite substantial. Many of the standard algorithms also prescribe some design decisions that would be suboptimal for Elasticsearch. For example:
- They are frequently structured around a log of operations, whereas Elasticsearch's cluster coordination is more naturally based directly on the cluster state itself. This enables vital optimisations such as batching (combining related operations into a single broadcast) more simply than would be possible if it were operation-based.
- They often have a fairly restricted ability to grow or shrink clusters, requiring a sequence of steps to achieve many maintenance tasks, whereas Elasticsearch's cluster coordination can safely perform arbitrary reconfigurations in a single step. This simplifies the surrounding system by avoiding problematic intermediate states.
- They often focus heavily on safety, leave open the details of how they guarantee liveness, and do not describe how the cluster should react if it finds a node to be unhealthy. Elasticsearch's health checks are complex, having been used and refined in the field for many years, and it was important for us to preserve its existing behaviour. In fact it took much less effort to implement the system's safety properties than to guarantee its liveness. The majority of the implementation effort focussed on the liveness properties of the system.
- One of the project goals was to support a zero-downtime rolling upgrade from a 6.7 cluster running Zen Discovery to a version 7 cluster running with the new coordination subsystem. It did not seem feasible to adapt any of the standard algorithms into one that allowed for this kind of rolling upgrade.
A full, industrial-strength implementation of a distributed consensus algorithm takes substantial effort to develop and must go beyond what the academic literature outlines. It is inevitable that customizations will be needed in practice, but coordination protocols are complex and any customizations carry the risk of introducing errors. It ultimately made the most sense from an engineering perspective to treat these customizations as if developing a new protocol.
Summary
Elasticsearch 7.0 ships with a new cluster coordination subsystem that is faster, safer, and simpler to use. It supports zero-downtime rolling upgrades from 6.7, and provides the basis for resilient data replication. To try out the new cluster coordination subsystem, download the latest 7.0 beta release, consult the docs, take it for a spin, and give us some feedback. | https://www.elastic.co/blog/a-new-era-for-cluster-coordination-in-elasticsearch | CC-MAIN-2020-29 | refinedweb | 3,506 | 52.9 |
This tutorial will teach you how to take input from a user in C# using the Console.ReadLine() method. When writing a computer program, you will often rely on a user interacting with your program. For example, your program may ask the user a question, wait for a response, and then do something with the information the user provides. Creating these types of dynamic user interactions becomes increasingly important as we write more complex programs in C#.
That is precisely the type of program we will be writing in this example.
C# Console.ReadLine() Example
In this example, we will write a .NET Core console application that gathers information from an end user and then we will process that data and print some results to the screen. Start by creating a new Project (File > New > Project…) and select the .NET Core Console App template. You can call the project UserInteraction.
You will be shown a boilerplate template that you are, by now, familiar with. This example will incorporate several features from our previous tutorials. Recall in our first C# program, we called the
Console.ReadLine() method to force our console application to wait for user input and prevent the terminal window from immediately closing after our code was executed. In this example, that same method will serve as the basis for our user interaction.
In the previous lesson, we learned about different data types and how to declare variables. In this example, we will be using variables of type Integer and String.
Let’s get started by adding the following lines to your Main() method:
Console.WriteLine("What is your name?"); Console.Write("Type your first name: ");
You will notice in Line 10, we use
Console.Write() instead of
Console.WriteLine(). The difference between the two methods is that the
.WriteLine() method of the Console class appends a new line character after the string, forcing the cursor to move to the next line. The
Write() method, on the other hand, will print a string and leave the cursor at the end of the line, without moving to a new line.
Now we will declare a string variable and use it to capture the user’s input. Add the highlighted lines to your code:
Console.WriteLine("Tell me about yourself."); Console.Write("Type your first name: "); string myFirstName; myFirstName = Console.ReadLine();
In line 12, we declared a string variable called
myFirstName. Then, in line 13, we used the assignment operator ( = ) to assign a value to that variable. We want
myFirstName to have the value of whatever the user types in the console. When the user types his name and presses the Enter key, the name is saved to the variable
myFirstName and can be used elsewhere in your program.
Let’s get some more information from the user. Let’s ask for the user’s last name. What type of variable should we use?
Console.WriteLine("Tell me about yourself."); Console.Write("Type your first name: "); string myFirstName; myFirstName = Console.ReadLine(); Console.Write("Type your last name: "); string myLastName = Console.ReadLine();
We use another String variable to save the user’s last name. Notice on Line 16, we declare a variable and assign a value all on one line. This is a shorter way to do what took us two lines in lines 12-13. It is good practice to assign a value to a variable as soon as possible after we declare it. Moreover, our code is shorter and easier to read when we organize it this way.
Getting an Integer from Console.ReadLine()
Next, let’s ask the user for the year he/she was born. We will want to store that value in a new valuable, and make some calculations with it in order to determine the person’s age. What variable data type do you think we should use?());
If we only wanted to show the value somewhere else in our application, we could just use a String like we did when we were taking the user’s name. However, we will be performing mathematical calculations, and we can’t do that on Strings. We need to store the value as some type of number. A birth year, like 1980, is a whole number value, so we can declare an Integer variable. I will use the tip we learned earlier to declare a variable and assign it a value on one line (Line 19).
Observe that we have added some special code,
Convert.ToInt32(), in order to save the value of
Console.ReadLine() to an Integer data type.
Console.ReadLine() expects a sequence of alphanumeric Unicode characters, a String. We can’t assign a String to an Integer variable. The data types don’t match. We will use the
ToInt32() method of the
Convert class in order to parse our String as an Integer. With this method, we can convert an alphanumeric string into a whole number and be able to use it to make calculations in our program.
Note: In a real application, we would want to include some error checking to ensure that the value entered is actually a number before we try to convert it.
A Complete Console.ReadLine() Example
Until now, we have simply saved the user’s input to a variable, but we haven’t actually done anything with that data. To finish our program, we will want to present some content to the end user based on the data they have provided.
using System; namespace UserInteraction { class Program { static void Main(string[] args) {()); Console.WriteLine(); Console.WriteLine("Hello, " + myFirstName + " " + myLastName); int myAge = DateTime.Now.Year - myBirthYear; Console.WriteLine("This year, you will be " + myAge + " years old."); int newAge = myAge + 5; Console.WriteLine("In 5 years, you will have " + newAge + " years."); Console.ReadLine(); } } }
On Line 21, we are simply inserting a blank line into the console to provide some spacing between the questions we have asked and the output we are about to display. On Line 22, we are joining our strings together with the Concatenation Operation ( + ). Notice, we can write a string inside double quotes like we have been doing, but we can also reference our existing string variables by their name. Since these variables were assigned values based on the user’s input, their first and last name will appear in our console application.
On Line 24, we calculate the user’s age based on the year they were born. We do this by subtracting the user’s birthyear, a value the user provided, from the current year which we retrieve using
DateTime.Now.Year. There are a lot of interesting things we can do with a
DateTime variable, so that will be a separate tutorial of its own! For now, it is sufficient to say that
DateTime.Now fetches the current local date and time, and
Year is a property of the DateTime structure that gets the year represented by our
DateTime instance.
On Line 27, we make another calculation with an Integer variable. This time, we want to tell the user how old they will be in five years. We will use the age we just computed on Line 24, and simply add 5 to that value. Finally, we write this to the console and our application is complete. If you run your interactive program, you should see something like the following:
The Bottom Line
In this tutorial, we learned how we can use the
Console.ReadLine() method to capture a user’s input. We learned how to save those values as String variables in order to retrieve them later in our program. We learned that
Console.ReadLine() returns a String, so if we want to save that String as a number, we have to make a conversion. Once we have the variable saved as an Integer data type, we can retrieve it to make calculations.
What useful ideas do you have for the
Console.ReadLine() method? Let me know in the comments! As always, if you have any questions, let me know below.
1 thought on “Get User Input in C# with Console.ReadLine()”
Good post, Found a tutorial link which also includes both Console.Readline and Console.Read in C# which also gives example and explain differences. | https://wellsb.com/csharp/beginners/get-user-input-in-csharp-with-console-readline/?replytocom=309 | CC-MAIN-2020-50 | refinedweb | 1,371 | 66.13 |
User:Hdgcfcf/TalkPageFull
From Uncyclopedia, the content-free encyclopedia
edit Welcome!
Hello, Hdgcfcf/TalkPageFull, Welcome to UnNews
You'll notice I;ve retitled UnNews:Scientific genius discovers proof of major math problem and tagged it as ICU (needs work)., but do not despair! Read on, takd some hints from other articles, and see if you can make if prettier, fi not more betterer. Cheers!
Rev. Zim (Talk) Get saved! 20:03, 14 March 2007 (UTC)
Reverend Zim_ulator says: "There are coffee cup stains on this copy, damnit! Now that's good UnJournalism."
Welcome to UnNews, Hdgcfcf,:Autobahn Minimum Speed Expected to Rise
Only two minor things, whick I've changed:
- You need to put the "http://" in the url field of the Sources template.
- I have an idiosyncrasy of putting quotes in italics, eg "quote", and I like to see it in UnNews because it's aesthetically more appealing and it makes quotes stand out. So, it's not a rule, but it'w what I like, and I am an UnNews god, so, draw your own conclusions.:00, 15 March 2007 (UTC)
edit Re:Forum:Cleanup on India
Sure, I'll help out. I'd suggest you put the article in a page in your namespace or something first though if you haven't already. --Hotadmin4u69 [TALK] 03:48, 7 June 2007 (UTC)
- Ah, it seems we're in the same time zone. But I won't be up that early. I could begin work on it tonight actually. --Hotadmin4u69 [TALK] 04:04, 7 June 2007 (UTC)
edit Test
Testing stuff out
- 05:28, 8 June 2007 (UTC) | http://uncyclopedia.wikia.com/wiki/User:Hdgcfcf/TalkPageFull | CC-MAIN-2015-18 | refinedweb | 269 | 72.87 |
Nov 16, 2006 04:39 PM|haoest|LINK
How do I limit the length for a string using RangeValidator instead of RegularExpressionValidator?
Say, I can simply set a RegularExpressionValidator's ValidationExpression to "(\s|.){0,5}" if i wanted to make sure an input has at most 5 characters.
But how do I do this with RangeValidator?
I tried setting RangeValidator's min value to 1 space, and max value to "~~~~~" because ascii value of a space is 32, and ascii value of "~" is 127. I was hoping the validator checks the input lexigraphically, but it did not work.
Any pointers?
Nov 17, 2006 03:58 AM|Jasson_King|LINK
I think RangeValidator is not a good validator to validate the length of the string.You'd better use RegularExpressionValidator to validate it.
Range validator control is a validator control which checks to see if a control value is within a valid range. The attributes that are necessary to this control are:
MaximumValue, MinimumValue, and Type.
Here are some sample codes:
>
Nov 18, 2006 04:13 AM|haoest|LINK
Jasson, you are right, a regularexpressionvalidator is better to validator the length of an input.
My question is more about how does a RangeValidator compare string types.
If I put Minimium value as capital A and maximium value as small z, then its equal to [a-zA-Z] in regular expression.
But according to some source, it says range validators compare strings by ascii value, if that were true, characters 0-9 lie between A and z, so an equivalent regular expression would be [0-9a-zA-Z], which is not the case... that's why I am so confused and hope you guys have some expertise.
msdn's doc on RangeValidator class isn't precise enough, at least not the part I looked at.
Nov 18, 2006 06:00 PM|PLBlum|LINK
It does compare by ASCII values.
Your confusion is that 0-9 is followed by A-Z which is followed by a-z.
Nov 19, 2006 07:22 PM|haoest|LINK
I did a small test, here's my code:
RangeValidator1.MinimumValue = "0"; RangeValidator1.MaximumValue = "z"; RangeValidator1.ControlToValidate = txt1.ID; string s = ""; for (int i = 0; i < 128; i++) { txt1.Text = ""+ (char)i ; RangeValidator1.Validate(); if (RangeValidator1.IsValid) { s += (char)i + " (" + i + ") is valid<br>"; } }
// ASCII( '0' ) = 48
// ASCII( 'z' ) = 122
Guess what the output is?
(9) is valid
(10) is valid
(11) is valid
(12) is valid
(13) is valid
(32) is valid
0 (48) is valid
... '1' to '8' are removed for readability
9 (57) is valid
A (65) is valid
... 'B' to 'X' are valid and removed for readability
Y (89) is valid
[ where did 'Z' go ?? ]
[ and where did the puncutations such as '[', '\', and ']' go? ]
a (97) is valid
... 'b' to 'y' are valid are removed for readability
z (122) is valid
See? it does compare by ascii value, but not totally. I can't find a fitting explaination
A slightly different version of the test, by changing to
RangeValidator1.MinimumValue = "" + (char) 0;
RangeValidator1.MaximumValue = "" + (char) 127;
the output diffferents significantly. Only characters with ascii value 0 - 32, plus 127, are valid.
WHY so if it compares by ascii value?
Nov 20, 2006 01:23 AM|Jasson_King|LINK
Hi,
I give you a detailed explanation about asp:RangeValidator control for your reference.
The RangeValidator control is used to check that the user enters an input value that falls between two values. It is possible to check ranges within numbers, dates, and characters.
Note:
1.The validation will not fail if the input control is empty. Use the RequiredFieldValidator control to make the field required.
2.The validation will not fail if the input value cannot be converted to the data type specified. Use the CompareValidator control, with its Operator property set to ValidationCompareOperator.DataTypeCheck, to verify the data type of the input value.
3.Specifies the data type of the value to check. The types are:
Nov 20, 2006 04:43 PM|haoest|LINK
The overview about RangeValidator is legitimate, but my question really is technically how it compares the input string against minimiumvalue and maximum value to make sure it's valid.
Nov 20, 2006 05:10 PM|PLBlum|LINK
Here is the exact function used by the client-side validation code to compare two values:
function ValidatorCompare(operand1, operand2, operator, val) {
var dataType = val.type;
var op1, op2;
if ((op1 = ValidatorConvert(operand1, dataType, val)) == null)
return false;
if (operator == "DataTypeCheck")
return true;
if ((op2 = ValidatorConvert(operand2, dataType, val)) == null)
return true;
switch (operator) {
case "NotEqual":
return (op1 != op2);
case "GreaterThan":
return (op1 > op2);
case "GreaterThanEqual":
return (op1 >= op2);
case "LessThan":
return (op1 < op2);
case "LessThanEqual":
return (op1 <= op2);
default:
return (op1 == op2);
}
}
Here is the evaluation function for the RangeValidator.
function RangeValidatorEvaluateIsValid(val) {
var value = ValidatorGetValue(val.controltovalidate);
if (ValidatorTrim(value).length == 0)
return true;
return (ValidatorCompare(value, val.minimumvalue, "GreaterThanEqual", val) &&
ValidatorCompare(value, val.maximumvalue, "LessThanEqual", val));
}
You can see its using the > = < operators in javascript to compare strings. Those work based on ASCII comparisons.
Nov 21, 2006 03:26 AM|Jasson_King|LINK
PLBlum, I agree with you.It does ASCII comparision in the asp:RangeValidator control.
As we have known,the ASCII values of '0' to '9' are 48 to 57, the ASCII values of 'a' to 'z' are 97 to 122 , and the ASCII values of 'A' to 'Z' are 65 to 90.
If we set '0' as minValue and 'z' as maxValue of a asp:RangeValidator control,"0~9","a~z" and "A~Z" should pass validation in the range validator.
Nov 21, 2006 06:48 PM|PLBlum|LINK
If you use those three strings: 0~9, a~z, and A~Z, I would expect them to be within the range.
If you use "z1", it is a "bigger" value than "z" (your max value) and should fail validation.
If you want to experiment, use the CompareValidator to compare just one part of the range, min or max, so you can test individual cases.
<asp:TextBox id="TExtBox1" runat=server /><br />
<asp:TextBox id="TextBox2" runat=server /><br />
<asp:CompareValidator
Nov 21, 2006 08:46 PM|haoest|LINK
I was trying to figure out more ways to do one thing.
RangeValidator does compare strings by ASCII value. But there are also other conditions added during the comparison.
if I have a RangeValidator RV.
RV.min = "!"; // 33
RV.max = "~"; //126
and if I put in 'a' as input, client side validation passes thinking it's valid. But server side actually turns the error message on thinking it's invalid... which means...
RangeValidator's client-side and server-side validation functions do not algorithmically equal (when validating strings at least.)
Jan 11, 2010 12:05 PM|nikitagon|LINK
function<div id="refHTML"></div>
Member
2 Points
Jul 06, 2011 11:20 AM|BasharSS|LINK
nikitagonfunction<div id="refHTML"></div>
Thanks alot nikitagon for your helpful replay
13 replies
Last post Jul 06, 2011 11:20 AM by BasharSS | http://forums.asp.net/t/1046041.aspx?how+to+use+RangeValidator+for+String+type+ | CC-MAIN-2014-10 | refinedweb | 1,166 | 56.55 |
Program using Input file, While loops, decisions (if/else), control break logic, accumulator
Morgan Grant
Greenhorn
Joined: Jul 29, 2012
Posts: 1
posted
Jul 29, 2012 19:06:03
Hello,
I am new here, and to Java (in fact, I'm taking a programming logic course, not even a Java course, so anything I know, I have learned from A Beginner's Guide to Programming Logic and Design (Farrell) and Java Programs to Accompany Programming Logic and Design (Smith)). In addition, please feel free to suggest previous topics that answer my questions, links to pages/free resources I can read to learn about beginning Java, etc. I admit that I am coding without really understanding what I'm doing; I look through my books and on the Internet for possible solutions or tips. I am trying to understand, but, well, I need help.
What the program should do:
It should take data from an input file: 5 lines of integers, with each line having 5 different whitespace-delimited numbers. The 5 numbers on each line represent the from area code, from phone number, to area code, to phone number, and number of minutes the call lasted (in that order). The 5th and final line is just 0s, for the sentinel value.
As long as it doesn't reach a sentinel value of 0 for the "from area code," the program loops, calculating the cost of the call.
Decision portion: the program has to make decisions, depending on the area codes and length of the call, when calculating the cost of a call.
Control break logic portion: If the "to area code" changes, the program should calculate the total cost of the calls to that particular area code (accumulator portion).
The output should look like:
Call Log and Billing Report
===========================
Call from 123 4567890 to 321 7777700 for 25 minutes costs $4.0
Call from 123 4567890 to 321 5555555 for 15 minutes costs $2.0
Total costs for calls to 321 is $6.0
Call from 123 4567890 to 800 2222222 for 10 minutes costs $0.0
Call from 123 4567890 to 800 3333333 for 10 minutes costs $0.0
Total costs for calls to 800 is $0.0
I don't think it matters if the output to the screen happens as the program runs or if it comes only after the program has finished (reached the sentinel value). I would think the former makes more sense, but I'm not sure if one is easier to code.
My questions:
Are lines 39 and 40 necessary?
Line 42: should it be an "if" or a "while"?
There is something wrong with lines 42 and 102 (error says "incompatible types") and 52 and 96 ("int cannot be dereferenced"). Should 0 instead be "null"?
/* Class : STB2 * Author: MS * Design: Java program to calculate and report the cost of an input call record */ import java.io.*; public class STB2 { public static void main(String[] args) throws Exception { final int FREE_AREA = 800; // named constants final int TIER_MINUTES = 20; final double CONNECTION_FEE = .50; final double FREE_CALL = .0; final double LOW_RATE = .10; final double HIGH_RATE = .15; int fromArea; // input variables int fromPhone; int toArea; int toPhone; int callLength; double callCost; // calculated cost double callTotal; // total for same area codes calls // Work done in the getReady() method FileReader fr = new FileReader("STT.txt"); BufferedReader br = new BufferedReader(fr); String str = ""; callTotal = 0; int oldToArea; boolean done; done = false; oldToArea = toArea; fromArea = Integer.parseInt(stringFromArea); toArea = Integer.parseInt(stringToArea); if((fromArea = br.readLine()) != 0) { done = false; oldToArea = toArea; } else done = true; while(done == false) { // This is the work done in the produceReport() method if(toArea.compareTo(oldToArea) != 0) { // logic to determine applicable call rates { if (toArea == FREE_AREA) // call to free area code { callCost = FREE_CALL; } else { if (toArea == fromArea) // call to same area code { if (callLength <= TIER_MINUTES) { callCost = FREE_CALL; } else { callCost = LOW_RATE * (callLength - TIER_MINUTES); } } else // calls to external area { if (callLength <= TIER_MINUTES) { callCost = CONNECTION_FEE + LOW_RATE * callLength; } else { callCost = CONNECTION_FEE + HIGH_RATE * TIER_MINUTES + LOW_RATE * (callLength - TIER_MINUTES); } } } } { System.out.println("Call Log and Billing Report"); System.out.println("==========================="); System.out.print("Call from " + fromArea + " " + fromPhone); System.out.print(" to " + toArea + " " + toPhone + " for " + callLength); System.out.println(" minutes costs $" + callCost); } // This is the work done in the controlBreak() method if(toArea.compareTo(oldToArea) != 0) { System.out.println("\t\t\tTotal for all calls into " + oldToArea + " is " + callCost); callTotal = 0; oldToArea = toArea; } if((fromArea = br.readLine()) != 0) { done = false; } else done = true; } // This is the work done in the finishUp() method System.out.println("\t\t\tTotal for all calls into " + oldToArea + " is " + callCost); br.close(); System.exit(0); } } }
Stephan van Hulst
Bartender
Joined: Sep 20, 2010
Posts: 3611
14
I like...
posted
Jul 29, 2012 20:51:03
0
Hi Morgan, welcome to CodeRanch!
Let's start of with the logic portion. Your main loop should look something like this:
String line = reader.readLine(); int fromArea = Integer.parseInt(line); while (fromArea != 0) { line = reader.readLine(); int fromPhone = Integer.parseInt(line); // etc. line = reader.readLine(); fromArea = Integer.parseInt(line); }
Here you can see why it's giving you a type error: You're trying to assign a
String
value to an int variable. You need to parse it first.
Now that that's out of the way, here's something more important. Your coding style is very monolithic. I expect that you're new to Object Oriented Programming. You should break up your program into more manageable units, it will become much easier for you to reason about it, and it will be much more easier to read.
Have you gone through the Java tutorials yet? They will help you on the way with OO programming. Google for Oracle Java tutorials.
Campbell Ritchie
Sheriff
Joined: Oct 13, 2005
Posts: 38334
23
posted
Jul 30, 2012 01:39:46
0
Welcome again
Agree with Stephan that you should break up that main method. In my opinion, a main method contains one statement, even though you see main methods a whole page long in some books.
You also need objects, which need classes to create them from. I disagree with Stephan about the Java Tutorials. They are the most comprehensive source of advice about Java known to modern science, but I don’t think they are very good for teaching the conventions of object‑oriented programming.
I suggest you probably want a PhoneCall class, and PhoneNumber class, and PhoneBill class. Maybe also a PhoneArea class.
Maybe also a PhoneCharges class. If you use that class, it would contain the list of charges as public static final fields (used as constants), a private constructor (to stop anybody creating an instance of that class), and maybe nothing else.
I agree. Here's the link:
subject: Program using Input file, While loops, decisions (if/else), control break logic, accumulator
Similar Threads
please help me
please help me
can someone please help me
The Art & Science of Java Chapter 4 Exercise 5 AverageList.java
Sombody please help this Noob!
All times are in JavaRanch time: GMT-6 in summer, GMT-7 in winter
JForum
|
Paul Wheaton | http://www.coderanch.com/t/588410/java/java/Program-Input-file-loops-decisions | CC-MAIN-2014-35 | refinedweb | 1,179 | 64.1 |
This article explains how to create a simple Secure Chat with WebCam support using Pfz.Remoting.
Pfz.Remoting
I've already written some articles about communication, in special Pfz.Remoting and Creating an Asymmetric/Symmetric Secure Stream without SSL, which is in fact part of the remoting DLL. This article will explain how to create a simple chat program using this remoting technology, which supports web-cam and sending files.
I then wanted to create an article explaining how to create a simple multi-user chat but, when I saw AForge and its video capture capabilities, I wanted to create a chat program with web-cam support. In the end, I created a very basic chat, almost without exception handling, but the important part is that it supports web-cam and audio capture with multiple clients.
The principle is very simple. Data can be lost. This is not a very big problem for video, it can be for audio, but it is not possible to simple avoid it.
For example, the program receiving the web-cam frames can receive 10 frames, but the client which receives it can only be able to receive 5. Intermediate frames must be lost.
Also, in the architecture I made for this sample, everything goes throught the server. So, the web-cam owner receives 10 frames. The server receives only seven frames. And it can distribute the frames it receives among its clients, which one can be able to receive them all (the 7), another can receive only 3 and another can receive only 5. So, how do we distribute the data?
I created two main classes that help the distribution of data that "can be lost". The ActualValueEnumerator is a class that "enumerates" throught its values, and waits for new values when all values are already read. But, if two or more values appears before the "enumerator" is able to read them, only the last one (the actual) remains. That's why I called the class ActualValueEnumerator.
ActualValueEnumerator
In the first version of the article I made the WebCamEnumerator class using ManualResetEvent, but it was ugly. Now it is very simple. So, look at the code:
WebCamEnumerator
ManualResetEvent
using System.Drawing;
using System.Drawing.Imaging;
using System.IO;
using AForge.Video;
using AForge.Video.DirectShow;
using Pfz.Collections;
namespace SecureChat.Client
{
public sealed class WebCamEnumerator:
ActualValueEnumerator<byte[]>
{
private volatile VideoCaptureDevice fDevice;
public WebCamEnumerator(string monikerString)
{
var device = new VideoCaptureDevice(monikerString);
fDevice = device;
device.DesiredFrameSize = new Size(160, 120);
device.NewFrame += p_NewFrame;
device.Start();
}
protected override void Dispose(bool disposing)
{
var device = fDevice;
if (device != null)
{
fDevice = null;
device.SignalToStop();
device.WaitForStop();
}
base.Dispose(disposing);
}
private void p_NewFrame(object sender, NewFrameEventArgs eventArgs)
{
var image = (Bitmap)eventArgs.Frame;
using(var stream = new MemoryStream())
{
image.Save(stream, ImageFormat.Jpeg);
ActualValue = stream.ToArray();
}
}
}
}
The important things here are: In the constructor, it starts the web-cam and sets the NewFrame event. In the dispose it stops the web-cam. And, in the new frame, it generates the byte array of the image and sets the ActualValue to such bytes. I convert the image to bytes so the server does not need to know how to process graphics... it only receives bytes and distribute bytes to the clients. It may not be "visible" in the first moment, but as this class inherits from ActualValueEnumerator, it can be used in a foreach (must use Pfz.Extensions.FastEnumeratorExtensions and the AsEnumerable method) and it already deals with the possibility of loosing frames.
NewFrame
ActualValue
Pfz.Extensions.FastEnumeratorExtensions
AsEnumerable
The server is always a bit more complicated, and I must admit I made it hard. I will try to make it easier and re-update the article, but to help the server be less-complicated I created the EnumeratorDistributor class. Similar to the ActualValueEnumerator, it allows values to be lost. But, instead of being an enumerator itself, it must be used over another enumerator, and each client gets its own enumerator which allows the frames to be lost. So, different speed clients will receive different number of values. So, to use it, the server receives the enumerator from one client (for example, the server receives the WebCamEnumerator from one client), and creates an EnumeratorDistributor over it. Each client that is now interested in such web-cam will get a new enumerator (throught the CreateClient method), which will always receive the last frame from such base enumerator. If frames are to be skipped, they will be.
EnumeratorDistributor
The project As I said earlier, the server is still more complicated than it needs to be, but the basic idea here is: There is the Common library, in which the classes and interfaces used by both client and server are present. Note that any method the client needs to call in the server or that the client must call in the client must be done via interfaces, which are here.
There is the server project, which supports the client connections and implements the server part, so it is responsible for telling all other clients when a new one connects and to find another client, requests its web-cam or sound-capture device and returning the appropriate enumerator when someone asks for the web-cam or sound from another client. And the client project. It is "simpler" than the server as it does not need to deals with many clients, but it must be able to capture video and sound and must be able to play them. Actually, when one client wants the video or sound from another one, it must request the Server for such information, which will be returned as a IFastEnumerator. The IFastEnumerator is faster than a custom enumerator over the TCP/IP, as instead of first calling "MoveNext" (which sends and receives information) and then Current (which agains sends and receives information) it uses only GetNext, which can return null to tell there are no more values. Well, I hope this little explanation and the source-code is enough for anyone trying to create an application which remote web-cam or sound-capture support.
IFastEnumerator
If you want to try the samples, you must:
The program supports simple chat and file transfer (drag-and-drop a file over a user name). Dragging a file to your own name will do a really wasteful work of sending the file to the server and then sending the file back to the. | http://www.codeproject.com/Articles/47898/Web-Cam-SecureChat?msg=3948130 | CC-MAIN-2016-40 | refinedweb | 1,067 | 55.24 |
POE::Component::ProcTerminator - Safely and easily kill runaway processes
This is a POE component wrapping the functionality of Proc::Terminator, but using POE's timers and event loop instead of the built-in (and obviously blocking) sleep-based loop.
Set up the component..
POE::Component::ProcTerminator->spawn(Alias => "proc_terminator");
Later, in your code..
$_[KERNEL]->call(proc_terminator => terminate => $a_pid, { siglist => [ SIGINT, SIGTERM, SIGKILL ] });
And that's all!
Semantics here are identical to that specified in Proc::Terminator. This manpage describes feature specific to the POE component
If the
Alias parameter is missing, then it will spawn the default session, called
proc_terminator
Further options are taken as defaults to be used for subsequent calls to "terminate"
Instruct the component to begin trying to kill
$pid, where
$pid can either be a single PID, or an array reference of PIDs.
The second argument may be an optional hashref of options, accepting the same kinds of things that Proc::Terminator's
proc_terminate does.
The more relevant options are
A list of signals which will be used to kill off the process. Signals are tried in succession, until the process is either dead or there are no more signals left in the list. The defaults exist in
@Proc::Terminator::DefaultSignalOrder
An interval (in seconds) to wait between shifting the signal list (specified in
siglist). In the world of POE, this effectively means how often the timer event will be called.
Some additional options:
If for whatever reason
Proc::Terminator is unable to kill your process, this callback will be invoked with a
Proc::Terminator::Batch as its sole argument. See Proc::Terminator for documentation on that object.
Control the behavior of
POE::Component::ProcTerminator when the session terminates prematurely (e.g. the kernel is being shut down, or some other exceptional terminal condition has occured).
The default behavior is to not do anything.
This value is one or more of the following flags, OR'd together (these are sub constants, exported to your namespace).
This instructs the cleanup handler to block and loop for this batch, as Proc::Terminator does in the synchronous API. The maximum time each batch can block is 5 seconds (though this might be configurable).
If
SIGKILL was not in the original list of signals to send to the process, then push it to the end of the signal-to-send stack. This only makes sense with the
PROCTERMf_CLEANUP_BLOCK
Blindly sends a
SIGKILL to the remaining processes. It does not make sense to use this flag with the other flags.
When POE shuts down and the component is about to stop, it will call an iteration of the loop (hopefully killing it). In the future I might want to do something like either invoke a 'real' Proc::Terminate session (with a reasonable time limit) or nuke them all with
SIGKILL. Dunno?
You may use and distribute this software under the same terms and conditions as Perl itself. | http://search.cpan.org/dist/POE-Component-ProcTerminator/lib/POE/Component/ProcTerminator.pm | CC-MAIN-2014-23 | refinedweb | 487 | 53 |
OutputWrapper class to Commands which outputs to stream object of choice
Review Request #11401 — Created Jan. 23, 2021 and updated
RBTools commands were previously using
sys.stderr
to output messages. This change abstracting outputting by using a
wrapper around a stream output object. This makes it easier to customize
output streams and suppress output. 4 wrappers (2 for standard output
unicode and byte, 2 for standard error unicode and byte) are initiated
in the Command class that is accessible to all child classes.
Ran all tests in
./tests/runtests.py rbtools.commandsand passed.
Added two new tests for
__init__.py. One that tests if output stream
object is set correctly for
OutputWrapperand another test that makes
sure
OutputWrapperpasses the correct message to the output stream object
Change Summary:
Fixing Review Bot issues and multiline string formatting
Checks run (1 failed, 1 succeeded)
flake8
Change Summary:
Added new test for Outputwrapper's new_line() that ensures output stream receives new line character
Checks run (1 failed, 1 succeeded)
flake8
Change Summary:
Replaced self.stdout.write("\n") with self.stdout.new_line()
Checks run (1 failed, 1 succeeded)
flake8
Checks run (1 failed, 1 succeeded)
flake8
All test cases should be in files called
test_X, not the
__init__.py. In this case I think you could probably add your new test class to the existing
test_main.pyfile.
This class should also probably be called
OutputWrapperTests.
A few changes in here:
- The file needs a module docstring
- The first thing after that docstring should be
from __future__ import unicode_literals
- Imports should be structured in three groups: standard library, third-party libraries, and then rbtools. In each group we alphabetize the imports. So these should look like:
import sys from kgb import SpyAgency from rbtools.commands import OutputWrapper from rbtools.utils.testbase import RBTestBase
Dedent this three spaces.
Typo: strea -> stream
Can we call these
_bytesinstead of
_byte?
Let's add another newline in here, before the format operation. Otherwise it's sometimes easy to miss when scanning quickly through the code.
self.stdout.write('Please log in to the Review Board server at ' '%s.' % urlparse(uri)[1])
The second line here needs to be indented a bit more:
% (review_request.submitter.fullname or review_request.submitter.username))
When we wrap a string, we put the space at the end of the first line. Let's also put the format parameters on their own line:
self.stdout.write('Recursively landing dependencies of ' 'review request %s.' % review_request_id)
Please undo the changes here (so the space is at the end of the first line).
One thing Christian just pointed out to me is that he's been moving to just
import kgb, and then using
kgb.SpyAgencywhere necessary. This makes it easier when we want to add things later like spy operations.
Please make sure this is sorted alphabetically.
We still need two blank lines here.
This isn't a good description. Let's say "Unit tests for command OutputWrapper"
Docstrings need to be in the following form:
"""Single-line summary. Multi-line description. """
Along with this, for any newly-introduced classes (or new functions/attributes on existing classes), we want to include version information directly below the description, like so:
"""Single-line summary. Multi-line description. Version Added: 3.0 """
Function docstrings must follow the same form, and specify any arguments, return values, and exceptions raised. Take a look at the docs for this on Notion.
We do have a lot of old docstrings in RBTools that don't conform to the modern standard, but if you grep around for "Args:" or "Returns:", you'll see examples.
This comment applies throughout the change.
We expect a blank line between the end of a block and a new statememt, to help separate that out as a new chunk of code.
There's some typos in these docstrings ("charater" here, "specifiy" in the class docstring). If your editor supports it, I recommend running a spell check.
This docstring is old, and needs to be modernized. As a step toward this, since you're introducing some really important new attributes, let's begin documenting them.
To do this, add an
Attributessection at the end of this docstring:
"""... ... Attributes: stderr (OutputWrapper): Standard error output wrapper that subclasses must write to. stdout (OutputWrapper): ... ...
Attributes should be in alphabetical order.
There's an inconsistency in variable names. You have
stderr_bytes(plural) and
stderr_byte(singular).
The plural form is what we want, I think.
Also, this if/else always sets both, so we have no reason to default to
Noneabove.
Here's an area where the new output wrappers solve a problem!
Not all diffs can be decoded as UTF-8, so we want to write as a byte string. This code here was using
sys.stdout.buffer.writeto let us do that on Python 3, and just
print()on Python 2.
The correct thing will be to replace this entire chunk of logic with a write to
self.stdout_bytes.
Make sure, along with replacing
sys.stdoutor
sys.stderr.
The string is a parameter to
textwrap.fill(), rather than a second parameter to
stdout.write. Since we're wrapping to the following line to give us room for the text, we need to keep it 4 spaces indented from the start of the main statement. In order words, what we had before was correct.
To keep lint checkers from complaining about indentations being multiples of 4, this should actually be in the same form we had before. The parameters indent to 4 spaces relative to the statement, allowing dictionary keys to indent 4 spaces from that. So you can revert these indentation changes, and keep the string on the second line.
This should be a single import statement.
This is a class docstring, so it needs to have a trailing period.
All unit test docstrings should be in the form of
Testing <Thing> <condition>.
For instance,
Testing OutputWrapper initializes stream
Testing OutputWrapper.write with ...
Testing OutputWrapper.new_line
etc. | https://reviews.reviewboard.org/r/11401/ | CC-MAIN-2021-21 | refinedweb | 992 | 66.33 |
Install - Build and Installation guide for perl5. you have problems, corrections, or questions, please see "Reporting Problems" below.
For information on what's new in this release, see the pod/perldelta.pod file. For more detailed information about specific.
If you find that your C compiler is not ANSI-capable, try obtaining GCC, available from GNU mirrors worldwide (e.g.)..com to let us know the steps you followed. This will enable us to officially support this option.
Although Perl can be compiled using a C++ compiler, the Configure script does not work with some C++ compilers.
The complete perl5 source tree takes up about 20 MB of disk space. After completing make, it takes up roughly 30 MB, though the actual total is likely to be quite system-dependent. The installation directories need something on the order of 20 MB, though again that value is system-dependent.
Configure.
If you are willing to accept all the defaults, and you want terse output, you can run
sh Configure -des
For my Solaris system, I usually use
sh Configure -Dprefix=/opt/perl -Doptimize='-xpentium -xO4' -des set to $prefix/site_perl if Configure detects that you have 5.004-era modules installed there. However, you can set it to anything you like..
There are several different ways to Configure and build perl for your system. For most users, the defaults are sensible and will work. Some users, however, may wish to further customize perl. Here are some of the main things you can change..
This option requires the 'sfio' package to have been built and installed. A (fairly old) version of sfio is in CPAN..
There also might be a more recent release of Sfio that fixes your problem..
By default, Configure will compile perl to use dynamic loading if your system supports it. If you want to force perl to be compiled statically, you can either choose this when Configure prompts you or you can use the Configure command line option -Uusedl.".
By default, Configure will offer to build every extension which appears to be supported. For example, Configure will offer to build GDBM_File only if it is able to find the gdbm library. (See examples below.) B, DynaLoader, Fcntl, IO, and attrs:
B (Always included by default) Threads use5005threads attrs (Always included by default) Unix systems do) remember that these extensions do not increase the size of your perl executable, nor do they impact start-up time, so you probably might as well build all the ones that will work on your system..
If you make any changes to config.sh, you should propagate them to all the .SH files by running
sh Configure -S
You will then have to rebuild by running
make depend make
You can also supply a shell script config.over to over-ride Configure's guesses. It will get loaded up at the very end, just before config.sh is created. You have to be careful with this, however, as Configure does no checking that your changes make sense..
Configure uses a CONFIG variable that is reported to cause trouble on ReliantUnix 5.44. If your system sets this variable, you can try unsetting it before you run Configure. Configure should eventually be fixed to avoid polluting the namespace of the environment., you should.
Alternatively, recent versions of GNU ld reportedly work if you include
-Wl,-export-dynamic in the ccdlflags variable in config.sh.
If you get this message on SunOS or Solaris, and you're using gcc, it's probably the GNU as or GNU ld problem in the previous item "Solaris and SunOS dynamic loading". you still can't compile successfully, try:
sh Configure -Accflags=-DCRIPPLED_CC
This flag simplifies some complicated expressions for compilers that get indigestion easily. (Just because you get no errors doesn't mean it compiled right!)..
$Id: INSTALL,v 1.58 1999/07/23 14:43:00 doughera Exp $ | http://search.cpan.org/~gsar/perl-5.6.0/INSTALL | CC-MAIN-2014-41 | refinedweb | 651 | 65.42 |
HoloLens CubeBouncer Application: Part 3
In the third part of his series about HoloLens, Joost van Shaik shows how to employ air tap on cubes on an arranged grid, move them around, and control direction.
Join the DZone community and get the full member experience.Join For Free
In the previous post, I showed you how to create a neatly arranged grid aligned to your view in your HoloLens app. Now, it’s time to make a mess of it – I am going to show you how to employ air tap on the cubes to move the them around, and gaze to determine which way they go. Plus, we are going to add some sound to them – when they bounce against each other, and against the wall.
Steal Some Sounds
First of all, we need two short sound clips. One for two cubes hitting each other, one for a cube hitting a wall. Any clip will do, as long as it’s short. I took two from the “Free Casual Sounds SFX Pack” that you can find in the Unity Asset Store (Click Windows/Assets Store or hit CTRL+9)
Beware: Importing directly from the store can get you more than you have bargained for, bloating your project with unnecessary stuff. I tend to create a new Unity3d project and import the package there first, to see what happens. And in this case I just cherry picked two sounds, that I called BounceCubel.wav and BounceBall.wav. Then I dragged into the Assets/Audio folder in Unity, as displayed here to the left.
Now it’s time for coding again. That is to say...
Steal Some Scripts
Writing code is awesome – not having to write code is even better. In the HoloLens toolkit, there’s a script that almost does what we want. It’s called “GestureManager” and it’s in Assets\HoloToolkit\Input\Scripts. It’s a script that handles the air tap and sends a kind of message to a selected object. That is almost what we want. We need to copy and adapt it a little.
The next few steps are best done when Unity3d is not running, as it starts to parse scripts as soon as they appear in the folder:
- Copy Assets\HoloToolkit\Input\Scripts\GestureManager.cs to. Assets\Custom\Scripts
- Rename the file to RaySendingGestureManager.cs
- Open RaySendingGestureManager.cs and change the class name to RaySendingGestureManager
- Change “Singleton<GestureManager>” into “Singleton<RaySendingGestureManager>”
- Remove the namespace namespace HoloToolkit.Unity
- Add “using HoloToolkit.Unity” on top.
- Find the method GestureRecognizer_TappedEvent
- Change it to the following:
private void GestureRecognizer_TappedEvent( InteractionSourceKind source, int tapCount, Ray headRay) { if (focusedObject != null) { focusedObject.SendMessage("OnSelect", headRay); } }
Basically, the only thing you do, is pass on the headRay “Ray” as extra parameter to the focusedObject.SendMessage method. This means that Unity3d should search for a Script component in the selected GameObject, find a method “OnSelect” in there with a a single parameter of type object — and invoke that. This feels a bit Javascripty, only even less typed, and there is no punishment, either: If the method is not found, nothing happens. No error – just nothing. It’s like sending a message – if no one is listening, we don’t care. Talk about loose coupling. You can’t get much looser than this.
Why do we need to send the headRay? Because we want the selected object to know from which direction it’s being looked at when you tap. If you want to know the details of air tap, have a look at the rest of the script. For now, we should be content that there is a method called on the selected object – i.e. the WortellCube – when there is an air tap while the cursor is on it.
One more thing – we need to apply the script. Select the HologramCollection/Managers object in the Hierarchy pane, then hit the “Add Component” button, Type “Ray Sending” in the search box and add the Ray Sending Manager component. Net result should be this:
Message Sent: Now Catch It!
Okay, so whenever someone air taps while their gaze is at ‘something’, Unity3D tries to call on “OnSelect” method. Let’s make sure that if that something is a WortellCube, it listens to it. Start as follows:
- Select the “Assets/Custom/Scripts folder
- Right click, hit Create/C# script
- Call it “CubeManipulator”
- Select the WortellCube Prefab
- Drag the new script on top Inspector tab, below the “Add Component” button.
- Hit File/Build Settings/Build and build the solution
- Open the Visual Studio solution (or reload it)
Open the CubeBouncer.cs script, and change it so it looks like this:
using HoloToolkit.Unity; using System.Collections; using UnityEngine; public class CubeManipulator : MonoBehaviour { private Rigidbody _rigidBody; public int ForceMultiplier = 100; // Use this for initialization void Start() { _rigidBody = GetComponent<Rigidbody>(); } public void OnSelect(object ray) { if (!(ray is Ray)) return; var rayData = (Ray)ray; _rigidBody.AddForceAtPosition( new Vector3( rayData.direction.x * ForceMultiplier, rayData.direction.y * ForceMultiplier, rayData.direction.z * ForceMultiplier), GazeManager.Instance.Position); } } }
Since the script is now part of a compound object, I can easily get a reference to other components in that objects. In this case, we can get a reference to the RigidBody component. For performance reasons (and because I find it more tidy looking), we store that in a private field that is initialized in the Start method.
And then there’s the OnSelect method. After first checking if we indeed get a Ray supplied in the ray parameter, we simply call the RigidBody’s AddForceAtPosition with a Vector3D made out of the direction of the Ray, times 100, and the position where the cursor is now. And that’s it. Unity gives the Cube a push in the direction you are looking on the place where your gaze is locked on the cube. And off it goes, spinning into the void, bouncing off other cubes and walls. Notice the cubes slowly slow down by themselves. That is all caused by the physical characteristics – the bouncyness of the Physic Material, the weight and drag of the Rigid Body. Having written software that calculates bouncing at angles myself, it almost feels like cheating. But this is the awesome power of a fleshed out physics engine.
To see this stage you actually have to rebuild the project from Unity first again or you will get an “[Position out of bounds!]” error when you try to run the app on a HoloLens or the emulator. This is because we have added a public field again, which translates into a property that can be filled by Unity. Every time you add public field to a script you will have to rebuild the project from Unity.
There is a still a thing missing. The cubes bounces off each other in dead silence. Time to fix that, and make the experience more immersive.
Let’s Make Some Noise!
First, we need to add three public and one private field to the CubeManipulator script:
private AudioSource _audioSource; public int Id; public AudioClip BounceTogetherClip; public AudioClip BounceOtherClip;
_audioSource gets initialized in Start as well:
void Start() { _rigidBody = GetComponent<Rigidbody>(); _audioSource = GetComponent<AudioSource>(); }
And then we have to act on a collision. That is just another simple private method that is automatically called by Unity. And wouldn’t you know it, it’s called “OnCollision” – duh. So let’s add that to the script:
void OnCollisionEnter(Collision coll) { // Ignore hits by cursors if (coll.gameObject.GetComponent<CursorManager>() != null) return; // Play a click on hitting another cube, but only if the it has a higher Id // to prevent the same sound being played twice var othercube = coll.gameObject.GetComponent<CubeManipulator>(); if (othercube != null && othercube.Id < Id) { _audioSource.PlayOneShot(BounceTogetherClip); } // No cursor, no cube - we hit a wall. if (othercube == null) { if (coll.relativeVelocity.magnitude > 0.1) { _audioSource.PlayOneShot(BounceOtherClip); } } }
So if a cube collides with a GameObject that has a CursorManager – it’s a cursor, so ignore it. But if it has a CubeManipulator - it’s a cube too! Then play the BounceTogetherClip AudioClip. But only if the Id of this cube is lower than the cube we hit – to prevent double sounds. If it was not a cursor or another cube, it was most likely a wall – so play the BounceOtherClip, but only if the cube hit the wall "hard enough." You do this by checking the collision magnitude. I took a quite arbitrary value. Once again – it feels like cheating. Unity takes care of mostly everything.
Aside – don’t try to be smart and change “if (othercube != null && othercube.Id < Id)” into
“if (othercube?.Id < Id)”. It will compile, it will run, and it will also mess up Unity3d. Avoid C# 6 constructs.
We only need to do a couple of things more to not only see but also hear our app at work. too. Save the script, then go back to Unity. Open the WortellCube prefab again, scroll down to the CubeManipulator script component. You will see – like expected – the script has three extra properties now: Id, Bounce Together Clip and Bounce Other Clip. So drag the Audio asset “BounceCube” on top of the Bounce Together Clip field, and BounceWall on top of Bounce Other Clips. Net result should be this:
Build the project again, and go back to Visual Studio. There is that tiny thing about the Id we have to solve. Because now we have an Id, but nothing is set yet. Remember that in the previous post, the method CreateCube in MainStarter.cs had an id parameter that was not used? Well, now we are going to use it. We change that method a little by adding two lines:
private void CreateCube(int id, Vector3 location, Quaternion rotation) { var c = Instantiate(Cube, location, rotation) as GameObject; //Rotate around it's own up axis so up points TO the camera c.transform.RotateAround(location, transform.up, 180f); var m = c.GetComponent<CubeManipulator>(); m.Id = id; }
And thus we assign the Id we need to make only one cube play the bounce together sound when two cubes hit. Deploy the project to a HoloLens or an emulator, and the cubes click when they bounce together, or emit a kind of boom when they hit the wall:
Also, if you have an actual HoloLens, try turning your head after launching some cubes – you will hear the sound actually coming from the right direction. This is because we have included an AudioSource component configured for Spatial Sound in the WortellCube prefab - and that travels along with the cube. Once again – it feels like cheating, but it works.
Concluding Remarks
We have learned to intercept air taps, and apply directed force to a rigid body – making cubes all bounce by themselves (or actually, by Unity. We have also learned to act on collisions, and play spatial sound. Still by writing very, very little code. Next time – speech command and moving stuff yourself!
Published at DZone with permission of Joost van Schaik, DZone MVB. See the original article here.
Opinions expressed by DZone contributors are their own. | https://dzone.com/articles/hololens-cubebouncer-application-part-3-air-tappin | CC-MAIN-2022-21 | refinedweb | 1,844 | 65.32 |
From: John Maddock (John_Maddock_at_[hidden])
Date: 2000-06-24 05:37:53
Paul,
><quote>
I've included a workaround for the mingw lack of <limits> in
rational_example.cpp, but mingw still breaks with
boost\rational.hpp: In function `class boost::rational<int>
boost::abs<int>(const boost::rational<int> &)':
rational_example.cpp:65: instantiated from here
boost\rational.hpp:311: no matching function for call to `abs (int)'
</quote>
<
gcc has a problem with using declarations at function scope: it just
ignores them! Put the using declaration at namespace scope (for gcc only)
and it should work.
- John.
Boost list run by bdawes at acm.org, gregod at cs.rpi.edu, cpdaniel at pacbell.net, john at johnmaddock.co.uk | https://lists.boost.org/Archives/boost/2000/06/3650.php | CC-MAIN-2019-18 | refinedweb | 119 | 53.68 |
Expressions (C# Programming Guide)
An expression is a sequence of one or more operands and zero or more operators that can be evaluated to a single value, object, method, or namespace. Expressions can consist of a literal value, a method invocation, an operator and its operands, or a simple name. Simple names can be the name of a variable, type member, method parameter, namespace or type.
Expressions can use operators that in turn use other expressions as parameters, or method calls whose parameters are in turn other method calls, so expressions can range from simple to very complex. Following are two examples of expressions:
In most of the contexts in which expressions are used, for example in statements or method parameters, the expression is expected to evaluate to some value. If x and y are integers, the expression x + y evaluates to a numeric value. The expression new MyClass() evaluates to a reference to a new instance of a MyClass object. The expression myClass.ToString() evaluates to a string because that is the return type of the method. However, although a namespace name is classified as an expression, it does not evaluate to a value and therefore can never be the final result of any expression. You cannot pass a namespace name to a method parameter, or use it in a new expression, or assign it to a variable. You can only use it as a sub-expression in a larger expression. The same is true for types (as distinct from System.Type objects), method group names (as distinct from specific methods), and event add and remove accessors.
Every value has an associated type. For example, if x and y are both variables of type int, the value of the expression x + y is also typed as int. If the value is assigned to a variable of a different type, or if x and y are different types, the rules of type conversion are applied. For more information about how such conversions work, see Casting and Type Conversions (C# Programming Guide).
Numeric expressions may cause overflows if the value is larger than the maximum value of the value's type. For more information, see Checked and Unchecked (C# Reference) and Explicit Numeric Conversions Table (C# Reference).
The manner in which an expression is evaluated is governed by the rules of associativity and operator precedence. For more information, see Operators (C# Programming Guide).
Most expressions, except assignment expressions and method invocation expressions, must be embedded in a statement. For more information, see Statements (C# Programming Guide).
The two simplest types of expressions are literals and simple names. A literal is a constant value that has no name. For example, in the following code example, both 5 and "Hello World" are literal values:
For more information on literals, see Types (C# Reference).
In the preceding example, both i and s are simple names that identify local variables. When those variables are used in an expression, the variable name evaluates to the value that is currently stored in the variable's location in memory. This is shown in the following example:
In the following code example, the call to DoWork is an invocation expression.
A method invocation requires the name of the method, either as a name as in the previous example, or as the result of another expression, followed by parenthesis and any method parameters. For more information, see Methods (C# Programming Guide). A delegate invocation uses the name of a delegate and method parameters in parenthesis. For more information, see Delegates (C# Programming Guide). Method invocations and delegate invocations evaluate to the return value of the method, if the method returns a value. Methods that return void cannot be used in place of a value in an expression.
The same rules for expressions in general apply to query expressions. For more information, see LINQ Query Expressions (C# Programming Guide).
Lambda expressions represent "inline methods" that have no name but can have input parameters and multiple statements. They are used extensively in LINQ to pass arguments to methods. Lambda expressions are compiled to either delegates or expression trees depending on the context in which they are used. For more information, see Lambda Expressions (C# Programming Guide).
Expression trees enable expressions to be represented as data structures. They are used extensively by LINQ providers to translate query expressions into code that is meaningful in some other context, such as a SQL database. For more information, see Expression Trees (C# and Visual Basic).
Whenever a variable, object property, or object indexer access is identified from an expression, the value of that item is used as the value of the expression. An expression can be placed anywhere in C# where a value or object is required, as long as the expression ultimately evaluates to the required type. | http://msdn.microsoft.com/en-us/library/ms173144(v=vs.100).aspx | CC-MAIN-2014-52 | refinedweb | 802 | 54.22 |
Body Biasing Injection experiments
Published on 05 October 2021
Contents
Several months ago, I stumbled upon this paper from Colin O’Flynn. Since I never heard about this technique, I thought it could be interesting to try it by myself and see how it performs.
Body Biasing Injection
Body Biasing Injection has first been presented in this paper from P. Maurine et al. The rough idea is to inject a high voltage pulse directly on the backside of the silicon of the target chip in order to generate faults.
The induced voltage bias will modify the threshold voltage of the underlying transistors and can potentially prevent some bits to be correctly forwarded through the CPU circuitry.
The main advantage of this technique compared to voltage glitching for instance is that it is more localized and can only perturbate a specific region of the target chip.
Several other papers are covering this technique:
- Yet Another Fault Injection Technique : by Forward Body Biasing Injection by P. Maurine et al.
- Body Biasing Injection Attacks in Practice by Noemie Beringuier-Boher et al.
Before we dig deeper, let's review the basics about chips. The die is a silicon substrate where several layers of metal are applied in order to create a circuit containing billions of transistors. The following picture from Wikipedia shows the different layers in a die:
When talking about different injection techniques, the terms frontside and backside are often used. Contrarily to the schematic above:
- The frontside is the metal side of the die. It's the one connecting the die pad to the actual chip package pins
- The backside is the silicon side of the die. It is basically the other side of the silicon substrate.
Building the injector
In all of these papers, a high voltage generator was used and coupled with a pulse generator in order to generate the pulses. This kind of equipment is generally quite complicated to use, so Colin O’Flynn took a different approach by using a transformer in order to generate a high voltage spike.
The MOSFET Q1 is used to quickly drop the voltage across the primary of the transformer. This will generate a voltage spike on the secondary coil, proportional to the number of turns ratio between the primary and the secondary of the transformer.
I built my own version of the BBI injection by using scrape parts I found around my desk.
- The two capacitors have been replaced by a single 100μF capacitor.
- The MOSFET I used is a IRF7470
- The transformer is a 6 turns inductance I recycled from a dead motherboard. I wound 60 turns of 0.2mm wire to make the secondary.
- The injection probe is a simple spring loaded pogo pin. This will be important later on.
In the end, the injection probe looks like this:
Testing the probe
As the turn ratio on the transformer is 1:10, a 1V input should generate a 10V spike on the secondary. Hooking up the oscilloscope directly on the output could be dangerous with standard probes, as the voltage could kill the input. I used a 100:1 passive probe that is able to withstand up to 1200V in order to be able to take measurements.
The width of the MOSFET gate pulse changes the shape and height of the pulse, so I tested various gate pulse widths and fixed the pulse width to 1.5μs, which provides the highest voltage spike.
Here is an example output pulse using 10V input voltage and 1500ns gate pulse:
and with 1V input voltage and 1500ns gate pulse:
Interestingly, the assumed turn ratio of 1:10 does not look correct at all (thanks to the high voltage probe, my oscilloscope is still safe). I am not sure about what is happening here, but my explanation is that the different coil wire diameters plays a role. The approximate ratio is measured around 1:50 instead of 1:10.
Update 10/27/2021 As mentioned by @WestonBraun:
You are getting a much higher voltage from your pulse generator because the transformer is not clamped, it's acting like a flyback transformer In your waveforms there is a negative pulse closer to what you would expect before the HV pulse.
Thanks to him for that information.
Bench preparation
Preparing the target
As a test target, I took a STM32F103 Nucleo devkit I had lying around. The chip is a STM32F103RBT6 which is quite common.
As BBI requires physical access to the backside, we need to grind through the plastic case of the chip in order to reach the silicon. However, there's a catch here: The silicon side of the die is located on the back of the chip package. We therefore not only need to grind the back side of the chip, but also dig a hole in the PCB in order to place the injection probe.
Looking at the schematics for the devkit shows that the PCB under the chip does not contain any important signals but only some GPIO traces to the headers, it is therefore possible to drill through the PCB and only loose some GPIOs. After that, grinding the chip is a matter of patience and precision. Note that you can grind the silicon a bit without harming the chip.
Once done, the board looks like this (the image shows a probe placed on the silicon as well).
One interesting thing to notice is that this chip's die has a copper pad glued to the silicon. This pad can be removed using a scalpel to avoid breaking the die. You can see that the left part of the silicon looks smooth, this is because I was able to tear apart the last part of the copper pad that was glued here.
Once the device has been prepared, we can verify that there is a resistance value between the silicon and the ground. By using a multimeter, I could measure this resistance to be between 90kΩ and 240kΩ depending on the measurement point on the die.
Preparing the XYZ table
In order to move the probe accurately across the die, a XYZ table is required. However, it is possible to use a 3D printer to get the same kind of results. I soldered the pogo pin to a piece of prototype board and a SMA connector and fixed this small PCB to the 3D printer head using a 3D-printed part.
My friend Azox prepared a Python library that can be used to drive the printer and scan a whole area using a few Python commands:
from glitch3d import printer from glitch3d import chip # Prepare the printer and set its zero position ender = printer(port="/dev/ttyUSB5",baudrate=115200, timeout=1) ender.load_settings("glitch3d/settings/ender3.ini") ender.set_pos(0,0,10) ender.go_home_xyz() # Prepare the area to scan with 0.2mm steps target = chip() target.set_home(166,76.5) target.set_end(168.7,79.8) target.steps=0.2 #Define Z axis positions in order to move the probe UP = 76 DOWN = 73 # Run the scan. Try to inject a glitch for every position for position in target.vertical(): x,y = position ender.set_pos(x,y,UP) ender.set_pos(x,y,DOWN) bbi.run() ender.set_pos(x,y,UP)
Target reset
As the target might be stuck in an unstable state, I used a P-MOSFET between the 5V power supply and the target. This MOSFET is driven by the Hydrabus to reset the target after each attempt to attempt the fault in a clean fresh state.
Final setup
Once everything is connected together, this huge mess of wires and probes is ready to be run.
Characterization
Test firmware
In order to detect faults and their effects, I prepared a very simple firmware in assembly that will trigger different debug breakpoints depending on the fault effect:
.thumb @ Variables .equ RAM, (0x20000000) .equ RAM_END, (0x20010000) @ Vector table start .long 0x20001000 @SP value .long _start @Reset .long _nmi @NMI interrupt .long _hardfault @Hard fault .long _memfault @Memory fault .long _busfault @Bus fault .long _usagefault @Usage fault .long 0x00000000 @Reserved .long 0x00000000 @Reserved .long 0x00000000 @Reserved .long 0x00000000 @Reserved .long 0x00000000 @SvCall .long 0x00000000 @Debug .long 0x00000000 @Reserved .long 0x00000000 @PendSV .long 0x00000000 @Systick @ Vector table end .thumb_func _start: MOV R2, #0 LDR R3, =(RAM) STR R2, [R3] .thumb_func _loop: ADD R2, R2, #1 LDR R1, [R3] ADD R1, R1, #1 STR R1, [R3] CMP R1, R2 BEQ _loop bkpt 0xb0 b _start .global _start _nmi: bkpt 0xa0 b _nmi _hardfault: bkpt 0xa1 b _hardfault _memfault: bkpt 0xa2 b _memfault _busfault: bkpt 0xa3 b _busfault _usagefault: bkpt 0xa4 b _usagefault
This firmware is a simple infinite loop that updates a register and a variable stored in SRAM synchronously. Both values are compared and if they differ somehow, a software breakpoint is raised.
I then used a Hydrabus as a SWD probe connected to the target in order to detect when the CPU stops its execution, retrieve the register status and the breakpoint code in order to know the kind of effect the fault triggered.
These helper functions were used to control and retrieve the status of the target CPU:
import pyHydrabus s = pyHydrabus.SWD() def init_swd(): s.bus_init() s.read_dp(0) s.write_dp(4, 0x50000000) CSW = s.read_ap(0, 0) s.write_ap(0,0,CSW|0b10) def halt_cpu(): s.write_ap(0, 0x4, 0xE000EDF0) # DHCSR s.write_ap(0, 0xc, 0xA05F0003) def run_cpu(): s.write_ap(0, 0x4, 0xE000EDF0) # DHCSR s.write_ap(0, 0xc, 0xA05F0001) def reset_cpu(): s.write_ap(0, 0x4, 0xE000ED0C) s.write_ap(0, 0xc, 0x05FA0004) def read_mem(address): s.write_ap(0, 0x4, address) return s.read_ap(0, 0xc) def read_register(regnum): s.write_ap(0, 0x4, 0xE000EDF4) # DCRSR s.write_ap(0, 0xc, regnum) s.write_ap(0, 0x4, 0xE000EDF8) # DCRDR return s.read_ap(0, 0xc) def write_register(regnum, value): s.write_ap(0, 0x4, 0xE000EDF8) # DCRDR s.write_ap(0, 0xc, value) s.write_ap(0, 0x4, 0xE000EDF4) # DCRSR s.write_ap(0, 0xc, regnum|1<<16) def is_running(): s.write_ap(0, 0x4, 0xE000EDF0) DHCSR = s.read_ap(0,0xc) return (DHCSR&0x20000) == 0
The most important part is in the run_cpu() function, where the C_DEBUGEN bit is set in the DHCSR register. That way the CPU will correctly halt at a bkpt instruction.
First scan
For each location on the die, two faults where generated per input voltage, and all the results where graphed based on the location and the type of fault. In this first scan, I used 0.2mm steps roughly across the center of the chip so I missed some regions on the edges of the die.
For instance, with 1V of input, the following faults were generated:
On the opposite, with 10V of input, way more faults are generated:
The labels are the following:
I performed the characterization with all voltages from 1 to 10 Volts in steps of 1V, and here are the results:
As we can see here, increasing the input voltage raises the chance that something bad happens. Looking back at the results, the highest successful glitch/voltage ratio is at 4 Volts, this is what we'll use for the next step.
Interestingly enough, no successful glitches were caused by a difference in the two register values. This either means there was an instruction skip or an instruction corruption.
RDP protection bypass
RDP is the flash readout protection for ST microcontrollers. Once set it prevents being able to read the flash memory from the bootloader or the debug interface. This protection has several levels depending on the MCU family. In this case, there is only one RDP level which prevents reading the flash but keeps the debug interfaces enabled.
We now know that the BBI can induce faults in the microcontroller. To make sure that this fault injection technique can be useful, let's try it on a more realistic scenario and try to reproduce the (in-)famous RDP bypass glitch but using BBI. This technique has been done several times using voltage glitch but as far as I know, never done using BBI.
The bypass vulnerability lies in the fact that the bootloader performs a software check in order to check if the flash content is locked or not, and a glitch can bypass this check:
After locking the flash using the ST utility or through the bootloader, the attack is quite simple:
- Send the Read memory command (0x11, 0xEE)
- Send the glitch when the RDP check is done
- Flash memory is sent back by the bootloader.
Now that we fixed the gate pulse width and the voltage, only two parameters are still to be discovered: the delay after the command and the physical location of the injection on the die.
Bench setup
To prepare the target, I fixed BOOT0 to 1 and BOOT1 to 0 using jumpers so the board always starts in bootloader mode. I then used the following functions to interact with the UART interface:
def read_ack(): ret = u.read(1) if ret == b'\x79': return True else: #print(f"Error {ret.hex()}") return False def target_init(): u.write(b'\x7f') i = 0 while not read_ack() and i < 3: i = i+1 def target_lock(): u.write(b'\x82\x7d') if not read_ack(): #print("Error sending read command") return False else: return True def target_program(): u.write(b'\x31\xce') if not read_ack(): print("Error sending write command") return None u.write(b'\x08\x00\x00\x00\x08') if not read_ack(): print("Error sending address") return None u.write(b'\x06SUCCESS\x45') if not read_ack(): print("Error sending data") return None def target_read(): u.write(b'\x11\xee') if not read_ack(): #print("Error sending read command") return None u.write(b'\x08\x00\x00\x00\x08') if not read_ack(): #print("Error sending address") return None u.write(b'\xff\x00') if not read_ack(): #print("Error sending size") return None return u.read(256)
Using these helper functions, I can program and lock the flash with the following commands:
target_reset() target_program() target_lock() target_read()
Glitch campaign
I then used the UART line to synchronize my FPGA with the Read memory command by counting the number of edges of the read memory command (0x11 0xEE). I can then start iterating through delays and die position:
for position in target.vertical(): x,y = position ender.set_pos(x,y,UP) ender.set_pos(x,y,DOWN) for delay in range(1200,2000,10): bbi.delay = delay for _ in range(5): target_reset() bbi.arm() status = target_read() if status is not None: print(f"fault @ {ender.get_pos()} with {delay} delay") print(status) oscillo_screenshot(f"{x}-{y}-{delay}.png") ender.set_pos(x,y,UP)
And after some time, I finally got positive results:
During the campaign, I also got several flash mass-erases which led to lots of false positives and a ruined night of sampling since the mass erase also clears the readout protection. I found out that the mass-erases happened around 3.5µs after the last rising edge of the UART line. The delay value (1790 for instance) are steps on the FPGA, which is clocked at 200MHz so about 8.95µs after the last rising edge of the UART line.
Note that the trigger is not perfectly aligned with the CPU execution, as there is several microseconds of jitter between the request and the response. This means that there could be some missing faults even if everything is correct. I could however get up to 3 faults at the same physical location out of 5 tries per location.
My script also takes oscilloscope screenshots of successful glitches. Here is one for reference:
Yellow line is the UART, blue is the BBI probe voltage.
Analyzing results
Here is the plot of all successful RDP bypasses locations on the die:
But now, what is exactly causing the fault ? We are unfortunately not able to access the debug interface when in bootloader mode, so how can we know which component is vulnerable here ? Fortunately for me, I was able to use some tools at work to thin and polish the die and take infrared pictures to see the basic blocks within the die. I then mapped the plot and the resulting picture and got this:
Even if the locations are not fully accurate, we can see that the flash memory (located on the lower right corner) is at fault. This means that we are faulting a flash read or register in order to bypass the RDP restriction. Interestingly enough, no CPU fault triggered the vulnerability (at least not with this input voltage).
Bootloader analysis
In order to analyze the results further, I dumped and analyzed the bootloader in order to see how the bootloader checks whether the RDP is enabled or not. After some time analyzing it, I located the following function:
┌ 14: fcn.1ffff132 (); │ 0x1ffff132 c049 ldr r1, [0x1ffff434] ; [0x1ffff434:4]=0x40022000 │ 0x1ffff134 0020 movs r0, 0 │ 0x1ffff136 c969 ldr r1, [r1, 0x1c] │ 0x1ffff138 8907 lsls r1, r1, 0x1e │ ┌─< 0x1ffff13a 00d5 bpl 0x1ffff13e │ │ 0x1ffff13c 0120 movs r0, 1 └ └─> 0x1ffff13e 7047 bx lr
The base register (0x40022000) is the flash controller base address. The code loads the register at address 0x4002201C and checks for the last two bits. Looking at the PM0075 document from ST shows that this is the FLASH_OBR register, and that the last two bits are:
- RDPRT - When set, this indicates that the Flash memory is read-protected
- OPTERR - When set, this indicates that the loaded option byte and its complement do not match.
All of this seems to make sense. The check is performed on a hardware flash controller register, this could explain why the successful faults are happening in the vicinity of the flash memory.
Conclusions
BBI is an interesting technique for sure. It offers the possibility to have a better spatial localization of the injected faults and provides a new way of injecting faults in a target at a reasonable cost.
However, this also comes with lots of drawbacks. The biggest one being the fact that the chip has to be prepared in order to access the silicon before being able to inject faults. The second drawback is the need for multiple tools and measurement probes in order to make sure that everything works the way it should. Adding the high voltage probes and the XYZ table into the bench setup is not an easy task and takes some time to setup correctly.
Compared to voltage glitching for instance, several other parameters have to be taken care of, such as the physical position of the injection probe and the input voltage. This makes the characterization way more challenging and time consuming.
It is nonetheless a different technique, and as such can be useful in some specific cases. The next step is to test BBI on devices protected by glitch detectors and see if it is possible to bypass this kind of protection. | https://www.balda.ch/posts/2021/Oct/05/bbi-experiments/ | CC-MAIN-2021-43 | refinedweb | 3,134 | 63.39 |
from IPython.display import Image
Machine learning is a scientific discipline that explores the construction and study of algorithms that can learn from data. Such algorithms operate by building a model from example inputs and using that to make predictions or decisions, rather than following strictly static program instructions.
We can take an example of predicting the type of flower based on the sepal length and width of the flower. Let's say we have some data (discretized iris data set on sepal length and width). The dataset looks something like this:
%run ../scripts/1/discretize.py data
150 rows × 3 columns
Now let's say we want to predict the type of flower for a new given data point. There are multiple ways to solve this problem. We will consider these two ways in some detail:
There are a lot of algorithms for finding a mapping function. For example linear regression tries to find a linear equation which explains the data. Support vector machine tries to find a plane which separates the data points. Decision Tree tries to find a set of simple greater than and less than equations to classify the data. Let's try to apply Decision Tree on this data set.
We can plot the data and it looks something like this:
%matplotlib inline import matplotlib.pyplot as plt import numpy as np # Adding a little bit of noise so that it's easier to visualize data_with_noise = data.iloc[:, :2] + np.random.normal(loc=0, scale=0.1, size=(150, 2)) plt.scatter(data_with_noise.length, data_with_noise.width, c=['b', 'g', 'r'], s=200, alpha=0.3)
<matplotlib.collections.PathCollection at 0x7f109313ca58>
In the plot we can easily see that the blue points are concentrated on the top-left corner, green ones in bottom left and red ones in top right.
Now let's try to train a Decision Tree on this data.
from sklearn.tree import DecisionTreeClassifier from sklearn.cross_validation import train_test_split X_train, X_test, y_train, y_test = train_test_split(data.ix[:, ['length', 'width']].values, data.type.values, test_size=0.2) classifier = DecisionTreeClassifier(max_depth=4) classifier.fit(X_train, y_train) classifier.predict(X_test)
array([1, 1, 0, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 0, 1, 1, 0, 1, 0, 0, 1, 0, 1, 1, 0, 1, 1])
classifier.score(X_test, y_test)
0.56666666666666665
So, in this case we got a classification accuracy of 56.67 %.
Now moving on to our second approach using a probabilistic model. The most obvious way to do this classification task would be to compute a Joint Probability Distribution over all these variables and then marginalize and reduce over these according to our new data point to get the probabilities of classes.
X_train, X_test = data[:120], data[120:]
X_train
120 rows × 3 columns
# Computing the joint probability distribution over the training data joint_prob = data.groupby(['length', 'width', 'type']).size() / 120 joint_prob
length width type 4 2 0 0.008333 3 0 0.033333 5 2 1 0.033333 2 0.008333 3 0 0.200000 1 0.016667 4 0 0.133333 6 2 1 0.075000 2 0.025000 3 1 0.225000 2 0.200000 4 0 0.041667 7 2 2 0.008333 3 1 0.066667 2 0.116667 4 2 0.008333 8 3 2 0.033333 4 2 0.016667 dtype: float64
# Predicting values # Selecting just the feature variables. X_test_features = X_test.iloc[:, :2].values X_test_actual_results = X_test.iloc[:, 2].values predicted_values = [] for i in X_test_features: predicted_values.append(np.argmax(joint_prob[i[0], i[1]])) predicted_values = np.array(predicted_values) predicted_values
array([1, 1, 0, 2, 1, 1, 1, 0, 0, 1, 0, 1, 1, 2, 1, 0, 2, 2, 1, 2, 0, 1, 0, 2, 0, 2, 2, 0, 0, 0])
# Comparing results with the actual data. predicted_values == X_test_actual_results
array([False, False, True, True, True, False, False, True, True, False, True, True, True, True, False, True, True, True, True, False, True, True, True, True, True, True, True, True, True, True], dtype=bool)
score = (predicted_values == X_test_actual_results).sum() / 30 print(score)
0.766666666667
In the previous example we saw how Bayesian Inference works. We construct a Joint Distribution over the data and then condition on the observed variable to compute the posterior distribution. And then we query on this posterior distribution to predict the values of new data points.
But the problem with this method is that the Joint Probability Distribution is exponential to the number of states (cardinality) of each variable. So, for problems having a lot of features or having high cardinality of features, inference becomes a difficult task because of computational limitations. For example, for 10 random variables each having 10 states, the size of the Joint Distribution would be 10^10.
Proababilistic Graphical Models (PGM): PGM is a technique of compactly representing Joint Probability Distribution over random variables by exploiting the (conditional) independencies between the variables. PGM also provides us methods for efficiently doing inference over these joint distributions.
Each graphical model is characterized by a graph structure (can be directed, undirected or both) and a set of parameters associated with each graph.
The problem in the above example can be represented using a Bayesian Model (a type of graphical model) as:
Image(filename='../images/1/Iris_BN.png')
In this case the parameters of the network would be $ P(L) $, $ P(W) $ and $ P(T | L, W) $. So, we will need to store 5 values for $ L $, 3 values for $ W $ and 45 values for $ P(T | L, W) $. So, a total of 45 + 5 + 3 = 53 values to completely parameterize the network which is actually more than 45 values which we need for $ P (T, L, W) $. But in the cases of bigger networks graphical models help in saving space. We can take the example of the student network shown below:
Image(filename='../images/1/student.png')
Considering that $ D $ has cardinality of 2, $ I $ has cardinality of 2, $ S $ has cardinality of 2, $ G $ has cardinality of 3 and $ L $ has cardinality of 2. Also the parameters in this network would be $ P(D) $, $ P(I) $, $ P(S | I) $, $ P(G | D, I) $, $ P(L | G) $. So, the number of values needed would be $ 2 $ for $ P(D) $, $ 2 $ for $ P(I) $, $ 12 $ for $ P(G | D, I) $, $ 6 $ for $ P(L | G) $, $ 4 $ for $ P(S | I) $, total of $ 4 + 6 + 12 + 2 + 2 = 26 $ compared to $ 2 * 2 * 3 * 2 * 2 = 48 $ required for the Joint Distribution over all the variables.
There are mainly 2 types of graphical models: | https://nbviewer.jupyter.org/github/pgmpy/pgmpy_notebook/blob/master/notebooks/1.%20Introduction%20to%20Probabilistic%20Graphical%20Models.ipynb | CC-MAIN-2018-51 | refinedweb | 1,090 | 58.28 |
For every integer m > 1, it’s possible to choose N so that the proportion of primes in the sequence 1, 2, 3, … N is 1/m. To put it another way, you can make the odds against one of the first N natural numbers being prime any integer value you’d like [1].
For example, suppose you wanted to find N so that 1/7 of the first N positive integers are prime. Then the following Python code shows you could pick N = 3059.
from sympy import primepi m = 7 N = 2*m while N / primepi(N) != m: N += m print(N)
Related posts
[1] Solomon Golomb. On the Ratio of N to π(N). The American Mathematical Monthly, Vol. 69, No. 1 (Jan., 1962), pp. 36-37.
The proof is short, and doesn’t particularly depend on the distribution of primes. Golomb proves a more general theorem for any class of integers whose density goes to zero.
One thought on “Integer odds and prime numbers”
A faster version, using a binary search:
With m = 20 it takes about a minute to run on my not-exactly-new computer. It could be sped up more if we re-implemented the sieve algorithm used in primepi to take advantage of the fact that we are calculating it for multiple values and we know the approximate range of those values. | https://www.johndcook.com/blog/2018/10/17/integer-odds-and-prime-numbers/ | CC-MAIN-2022-27 | refinedweb | 229 | 70.13 |
From: David Abrahams (dave_at_[hidden])
Date: 2004-05-05 19:48:26
Jaakko Jarvi <jajarvi_at_[hidden]> writes:
> On May 5, 2004, at 1:26 PM, David Abrahams wrote:
>
>> Jaakko Jarvi <jajarvi_at_[hidden]> writes:
>>
>>> On May 4, 2004, at 7:56 PM, Fredrik Blomqvist wrote:
>>>
>>>> I think it would be convenient if one could treat std::pair as a
>>>> boost
>>>> two-tuple.
>>>> Both tuples::tie and tuple assignment already work with std::pair
>>>> but I've
>>>> noticed that the get<> accessor doesn't.
>>>>
>>>> So, I propose a simple extension to tuples::get<> with the obvious
>>>> semantics:
>>>> boost::get<0>(p) == p.first
>>>> boost::get<1>(p) == p.second
>>>>
>>>
>>> The trouble with get is that pairs live in namespace std, so if pair
>>> get's are not
>>> in std, they won't get found by ADL. But if we are willing to live
>>> with not having ADL,
>>> adding gets for pairs would be fine.
>>
>> Isn't there another problem? Are references legal arguments for
>> std::pair?
>>
>
> No, they aren't and thus we can't make pairs and 2-tuples have
> equivalent behavior in
> all cases. However, it seems that adding the get functions for pairs
> would not be
> problematic in this sense.
>
> OTOH, what Thorsten suggested, that 2-tuples would inherit from pairs,
> would
> not work.
tuple<int&,char> could be derived from
pair<reference_wrapper<int>,char>, no?
-- Dave Abrahams Boost Consulting
Boost list run by bdawes at acm.org, gregod at cs.rpi.edu, cpdaniel at pacbell.net, john at johnmaddock.co.uk | https://lists.boost.org/Archives/boost/2004/05/65037.php | CC-MAIN-2020-10 | refinedweb | 253 | 75.1 |
A required topic to get started
Anniversary Part Title Link
thinking
The relationship inside is a numeric acyclic graph, and there are obviously two choices for each node: to go to a party, not to go to a party, so the state transfer equation comes out
- To queue\(dp[i][1] = dp[i][1] + dp[i_{son}][0]\), the boss attended the party, and his direct reports could not attend the party
- Not attending a party\(dp[i][0] = dp[i][0] + max(dp[i_{son}][1], dp[i_{son}][0])\), the boss is not attending the party, and his immediate subordinates may or may not attend.
Code
#include<bits/stdc++.h> using namespace std; const int N = 1e4 + 10; int head[N], to[N], nex[N], in[N], cnt = 1; int dp[N][2], n; void add(int x, int y) { to[cnt] = y; nex[cnt] = head[x]; head[x] = cnt++; in[y]++; } void dfs(int rt, int fa) { for(int i = head[rt]; i; i = nex[i]) { if(to[i] != fa) { dfs(to[i], rt); dp[rt][1] += dp[to[i]][0]; dp[rt][0] += max(dp[to[i]][0], dp[to[i]][1]); } } } int main() { // freopen("in.txt", "r", stdin); while(scanf("%d", &n) != EOF) { cnt = 1; for(int i = 1; i <= n; i++) { scanf("%d", &dp[i][1]); head[i] = 0; dp[i][0] = 0; } int x, y; while(scanf("%d %d", &x, &y) && (x + y)) add(y, x); for(int i = 1; i <= n; i++) if(!in[i]) { dfs(i, 0); printf("%d\n", max(dp[i][0], dp[i][1])); } } return 0; }
One idea was right, but the details beat me up.
Rebuilding Roads Title Link
thinking
It's not difficult to think about it. An array of dp[i][j], the path that needs to be cut when the number of nodes connected to the ith node on the record plus the number of nodes connected to it on its subtree is obviously dp[i][1] =the subtree directly connected to it plus the path of a parent node.
Then the state transfer equation has \(dp[i][j] = min (dp[i][j], dp[i][k] + dp[i_{son}][j-k] - 2)\, which is subtracted by 2 because the two connected edges are recorded in the DP array of each other.
Now let's go into one detail. In our final answer, we subtract 1 from all of dp[root][] because we added the edges of its parent node before, but he has no parent node.It's a bit of a pothole, or a delicious dish.
Code
// #include<bits/stdc++.h> #include<iostream> #include<cstring> #include<algorithm> #include<cstdio> using namespace std; const int N = 2e2 + 10; int head[N], to[N], nex[N], in[N], cnt; int dp[N][N], sz[N], n, m; void add(int x, int y) { to[cnt] = y; nex[cnt] = head[x]; head[x] = cnt++; in[y]++; } void dfs1(int rt, int fa) {//Find the number of directly connected nodes and initialize the dp array. sz[rt] = 1; for(int i = head[rt]; i; i = nex[i]) { if(to[i] != fa) { dfs1(to[i], rt); sz[rt] ++; } } } void solve(int rt, int fa) { for(int i = head[rt]; i ; i = nex[i]) { if(to[i] != fa) { solve(to[i], rt); for (int j = m; j > 1; j--) for (int k = 1; k < j; k++) dp[rt][j] = min(dp[rt][j], dp[rt][j - k] + dp[to[i]][k] - 2); } } } int main() { // freopen("in.txt", "r", stdin); int x, y; while(scanf("%d %d", &n, &m) != EOF) { memset(head, 0, sizeof head); memset(dp, 0x3f, sizeof dp); cnt = 1; for(int i = 1; i < n; i++) { scanf("%d %d", &x, &y); add(x, y); } for(int i = 1; i <= n; i++) if(in[i] == 0) { // cout << i << endl; dfs1(i, 0); } for(int i = 1; i <= n; i++)//It seems that this can be done directly in dfs1,, dp[i][1] = sz[i]; // cout << sz[i] << endl; for(int i = 1; i <= n; i++) if(in[i] == 0) { solve(i, 0); dp[i][m]--; } // For(int I = 1; I <= n; i++)//debug bug_ing // for(int j = 1; j <= m; j++) { // printf("%d%c", dp[i][j], j == m ? '\n' : ' '); // } int ans = 0x3f3f3f3f; for(int i = 1; i <= n; i++) dp[i][m] < ans ? ans = dp[i][m] : ans = ans;//Finally, I got the answer. I wrote a long list of expressions that I don't understand right now,, printf("%d\n", ans); } return 0; }
A slightly more complex topic
Actually, it's quite understandable.
Optimal Connected Subset Title Links
thinking
This is a question of building your own map. Obviously, it's not difficult to imagine that if you build a connected edge between two points of dis = 1, the next thing to do is DP s on the tree.
Considering the state transfer equation, for each point we can put it in the union set or not, and at the same time we must ensure that each pair of points is connected to each other.
We use dp[i][0] to indicate that this point is not on the union set, and dp[i][1] to indicate that this point is on the union set.
Considering how dp[i][0] changes, it is not difficult to see that its value must be the maximum value on its subtree, so \(dp[i][0] = max(dp[i][0], dp[i_{son}][1], dp[i_{son}][0])\
Next we consider dp[i][1], and we want to ensure its connectivity by getting \(dp[i][1] = max(dp[i][1], dp[i_{son}][1] + dp[i][0])
Then follow this idea and run DFS once.
Code
// #include<bits/stdc++.h> #include<iostream> #include<algorithm> #include<cstdio> #include<cstring> using namespace std; const int N1 = 1e3 + 10, N2 = 1e6 + 10; int head[N1], to[N2], nex[N2], cnt; int dp[N1][2], n; struct point { int x, y; }a[N1]; void add(int x, int y) { to[cnt] = y; nex[cnt] = head[x]; head[x] = cnt++; } void dfs(int rt, int fa) { for(int i = head[rt]; i; i = nex[i]) { if(to[i] != fa) { // cout << 1 << endl; dfs(to[i], rt); dp[rt][1] = max(dp[rt][1], dp[rt][1] + dp[to[i]][1]); dp[rt][0] = max(dp[rt][0], max(dp[to[i]][0], dp[to[i]][1])); } } } int main() { // freopen("in.txt", "r", stdin); int x, y, w; while(scanf("%d", &n) != EOF) { for(int i = 1; i <= n; i++) dp[i][0] = dp[i][1] = head[i] = 0; cnt = 1; for(int i = 1; i <= n; i++) { scanf("%d %d %d", &a[i].x, &a[i].y, &w); dp[i][1] = w; } for(int i = 1; i <= n; i++) for(int j = i + 1; j <= n; j++) if((abs(a[i].x - a[j].x) + abs(a[i].y - a[j].y)) == 1) { add(i, j), add(j, i); // cout << 1 << endl; } dfs(1, 0); // for(int i = 1; i <= n; i++) // printf("%d %d\n", dp[i][0], dp[i][1]); printf("%d\n", max(dp[1][0], dp[1][1])); } return 0; }
There should have been a fourth question, and I didn't expect to look at the tree backpack, but I haven't learned it yet, so I started by croaking. | https://programmer.ink/think/a-simple-start-to-tree-dp.html | CC-MAIN-2021-39 | refinedweb | 1,229 | 68.03 |
I'm a software development engineer in Microsoft Office and have been working mostly on the RichEdit editor since 1994. In this blog I focus on mathematics in Office along with some posts on RichEdit and the early Windows days
What paragraphs are and how they are formatted are questions that continually come up both inside and outside of Microsoft. So this post describes Word/RichEdit paragraphs in general. A subsequent post will describe the “math paragraph”, which is part of a regular paragraph and is used for displayed equations, as distinguished from inline mathematical expressions.
The paragraph is a very important structure in written language. About six years ago, I developed the RichEdit binary format, which shipped with RichEdit 5.0 (Office 2003) as RichEdit’s preferred copy/paste format and was used by OneNote 2003 and 2007. In the design stage I talked with Eliyeser Kohen, of TrueType, OpenType, LineServices, and Page/Table Services fame. I was inclined to have four parallel streams: plain text, character formatting, paragraph formatting, and embedded objects, a format corresponding to the internal RichEdit representation. Eliyezer agreed such parallel streams were important, but insisted that they should be broken up into paragraphs. At the time, this seemed like extra overhead to me and I naturally didn’t want to slow things down. But I followed his advice and it’s right on! First, what’s a paragraph? Then what’s paragraph formatting? Then what’s a “soft” paragraph? And finally, what’s the final EOP?
From a natural language point of view, a paragraph is one or (preferably) more closely related sentences that naturally belong together without becoming too long. From the Word/RichEdit point of view, a paragraph is a string of text in any combination of scripts and inline objects, including possible “soft” line breaks and “math paragraphs”, with uniform “paragraph formatting” up to and including a carriage return. The carriage return (CR) is given by the Unicode character U+000D, which you insert by typing the Enter key. In plain text on PC’s, the paragraph is usually terminated by a CRLF (U+000D U+000A) combination, but not ordinarily inside a Word document or RichEdit instance. Just the CR is used.
<rant> It’s quite convenient to use a single character. It takes up less space than the CRLF and it’s easier to parse/manipulate, since it’s an atomic entity. In fact, Unix already used a single character, the line feed (LF—U+000A), back in 1972, several years before the PC operating systems were developed. Unfortunately, the PC with its DEC heritage preferred CRLF, a holdover from the old teletype days, and Word and the Mac shortened it to CR instead of LF. Windows NotePad still isn’t able to display Unix/Linux LF terminated paragraphs correctly after all these years (note that 2008 > 1972). I’m on a mission to fix that, but please don’t hold your breath! Anyhow I like CR better than LF, mostly because of habit. Clearly it would have been better to have a single standard. In this connection, it’s interesting to note that Word and RichEdit can handle CR, LF, and CRLF terminated paragraphs, even though they prefer CR. </ rant>
A key characteristic of a paragraph is its formatting, which is represented by a pretty large set of properties. Most of these properties are settable using a paragraph formatting dialog. In particular, there’s alignment (left, right, center, justify, along with a variety of East Asian options), space before/after, line spacing (single, double, multiple, at least, exactly), left/right margins and wrapped line indent, line/page breaks, tabs (oh, how I wish HTML had tab support!), and bullets/numbering. Internally, paragraphs and their formatting get overloaded with such entities as tables and drop caps, but let’s not get distracted. Using hot keys like Ctrl+E for centering or the paragraph formatting dialog, you can set the formatting for the paragraph(s) in which the current selection occurs. If you just have an insertion point (the blinking caret), only the paragraph containing the insertion point gets the new formatting.
When you create a numbered list, you may want to have an entry with one or more line breaks but no new number or bullet. To insert a line break without ending the paragraph, type Shift+Enter, which inserts a Vertical Tab (VT—U+000B). Even though you get a line break, you don’t end the current paragraph, so no new line number appears. All the paragraph properties remain the same with the new line and the space-before property doesn’t apply to the new line, since the line is inside the paragraph. Sometimes it’s handy to refer to a sequence of lines terminated by such a line break as a “soft paragraph”. In HTML, these “soft line breaks” are represented by the <BR> tag, whereas “hard” paragraphs are identified by the <P> tag.
Thinking of numbered entities, you might want to change the character formatting of the number or bullet out in front. For example, you might want to use a larger font size or a different font. To do this, change the appropriate character formatting of the CR that ends the paragraph.
To provide a place to attach paragraph formatting for the last paragraph, every Word document and every RichEdit rich-text instance has a “final EOP” (end of paragraph), represented by a CR (CRLF in RichEdit 1.0). You cannot delete the final EOP, nor can you move the insertion point past it. In the Word and RichEdit object models, the ranges can select up through the final EOP, but they cannot collapse to an insertion point that follows the final EOP. The farthest they can go is up to just before the final EOP. Similarly messages like EM_EXSETSEL cannot make the RichEdit selection go beyond the final EOP.
RichEdit also supports plain-text controls, which are characterized by uniform paragraph formatting and don’t need, or have, a final EOP. An empty plain-text control is really empty, whereas a rich-text control always has at least one character, the final EOP.
If you would like to receive an email when updates are made to this post, please register here
RSS
PingBack from
Yes, it would be great if HTML supported tabs. But it would be even better if Word/RichEdit supported a box model akin to the one HTML (with CSS) does. There's no inheritable hierarchy of styles, and margin/border/padding/background options are outdated and rudimentary at best. (E.G., Word can't overlap or even closely space in-line paragraphs, it can't do alpha channel transparency, and it can't use an image as a background to a paragraph.)
The earlier post Breaking Equations into Multiple Lines describes equation line breaking and alignment.
Great info regarding paragraphs. I never knew there were actual difference between <breaks> and paragraphs. Thanks. | http://blogs.msdn.com/murrays/archive/2008/11/22/paragraphs-and-paragraph-formatting.aspx | crawl-002 | refinedweb | 1,164 | 60.95 |
- 06 Jun, 2018 2 commits
- 23 Apr, 2018 1 commit
- Lin Jen-Shin authored
- 22 Apr, 2018 1 commit
- James Edwards-Jones authored
Moves LDAP to its own controller with tests Provides path forward for implementing GroupSaml
- 09 Feb, 2018 3 commits
[10.4] Fix GH namespace security issue
- Clement Ho.
- 07 Sep, 2017 1 commit
- Tiago Botelho authored
- 02 Aug, 2017 1 commit
- 21 Jun, 2017 1 commit
👮Grzegorz Bizon authored
- 24 May, 2017 3 commits
-
- 05 Apr, 2017 1 commit
- 30 Mar, 2017 1 commit
- Douglas Barbosa Alexandre authored
- 23 Feb, 2017 2 commits
This reverts commit cb10b725c8929b8b4460f89c9d96c773af39ba6b.
- 19 Dec, 2016 2 commits
- Rémy Coutable authored
Signed-off-by:
Rémy Coutable <[email protected]>
- Rémy Coutable authored
The reason is that Gitea plan to be GitHub-compatible so it makes sense to just modify GitHubImport a bit for now, and hopefully we can change it to GitHubishImport once Gitea is 100%-compatible. Signed-off-by:
Rémy Coutable <[email protected]> | https://gitlab.com/jimschubert/gitlab-ce/commits/9814c646ff872fb29a2390196c116a9a41c0f258/spec/support/controllers | CC-MAIN-2019-43 | refinedweb | 162 | 54.76 |
Groovy script that count the number of lines in a files in my process
Hi everyone ,
im trying to count the number of lines in a document in my process . My script is :
//////////////////////////////////////////////////////////////////////////////////
def lines = 0
doc_Integrateur_Enseignes_CSV.eachLine {
lines++
}
////////////////////////////////////////////////////////////////////////////////
doc_Integrateur_Enseignes_CSV is the name of the document in my process ( i upload that document in the task that is prior to the task that contains the script ).
when i try to run the process ,when i submit the form that contains the script i have a screen with error while submiting the form written on it. Im working with bonita 6.5. Any idea everyone ? Thank you
No answers yet.
What says the logs of your process ? | https://community.bonitasoft.com/questions-and-answers/groovy-script-count-number-lines-files-my-process | CC-MAIN-2018-51 | refinedweb | 116 | 82.34 |
High Schoolers Push Down Price of Near-Space Photography
timothy posted more than 3 years ago | from the can-I-get-a-student-discount dept.
."
Had sponsor (0)
Anonymous Coward | more than 3 years ago | (#35821890)
Note that they had a sponsor for the GPS portion.
Re:Had sponsor (2)
lazy genes (741633) | more than 3 years ago | (#35821926)
Space? (-1, Redundant)
fnj (64210) | more than 3 years ago | (#35821908)
Since when is 95,000 feet of altitude in "space?"
Re:Space? (1)
nedlohs (1335013) | more than 3 years ago | (#35821962)
No one called it space.
Re:Space? (2, Informative)
Anonymous Coward | more than 3 years ago | (#35821996)
Since when is 95,000 feet of altitude in "space?"
I believe they used the term "Near Space," which lies between 65,000 and 350,000 feet.
Re:Space? (1)
Anonymous Coward | more than 3 years ago | (#35822000)
In space, no one can read an altimeter.
Re:Space? (1)
ackthpt (218170) | more than 3 years ago | (#35822134)
Since when is 95,000 feet of altitude in "space?"
Not much water vapor up there, almost as good as space.
:)
Re:Space? (1)
pushing-robot (1037830) | more than 3 years ago | (#35822254)
Not much water vapor up there, almost as good as space.
:)
So would a hike through my nearest desert qualify as a spacewalk?
Re:Space? (0)
peragrin (659227) | more than 3 years ago | (#35822338)
considering that every moon astronaut trained in the desert it just might.
Re:Space? (4, Funny)
thedonger (1317951) | more than 3 years ago | (#35822428)
Re:Space? (1)
demonbug (309515) | more than 3 years ago | (#35823032)
Considering that the moon landing was STAGED in a desert it just might!
I suppose the moon would be considered a desert, so... I agree??
Re:Space? (1)
NekSnappa (803141) | more than 3 years ago | (#35823112)
Re:Space? (1)
ThanatosST (1896176) | more than 3 years ago | (#35825860)
Re:Space? (1)
Anonymous Coward | more than 3 years ago | (#35822192)
Re:Space? (1)
mangu (126918) | more than 3 years ago | (#35824366)
Since when is 95,000 feet of altitude in "space?"
Considering how uncrowded it's up there, "space" seems like a very good name for it.
Street View (1)
MrQuacker (1938262) | more than 3 years ago | (#35821964)
Push down the price!? (0)
Anonymous Coward | more than 3 years ago | (#35821972)
The price of what? Was there a market? Buyers? I thought google maps sat view was free??? Is there any straw you Space Nutters won't grasp at to pretend space is some kind of exciting marketing opportunity?
Re:Push down the price!? (1)
ae1294 (1547521) | more than 3 years ago | (#35821988)
The price of what? Was there a market? Buyers? I thought google maps sat view was free??? Is there any straw you Space Nutters won't grasp at to pretend space is some kind of exciting marketing opportunity?
We just really like straw... U MAD?
Re:Push down the price!? (-1)
Anonymous Coward | more than 3 years ago | (#35822016)
Would you prefer we study the worms living in your bunghole?
Open wide, baby!
Re:Push down the price!? (0)
Anonymous Coward | more than 3 years ago | (#35822794)
You might also check into that whole "anal reference" thing. I hear you guys can get married now?
Slashdotted immediately :( (0)
Anonymous Coward | more than 3 years ago | (#35821976)
Slashdotted immediately
:(
Boom chicka chicka "Server is down", boom chicka.. (2)
Ced_Ex (789138) | more than 3 years ago | (#35821980)
I saw the site for a second... and boom... server goes down.
Tis' better to have looked and lost than to have never looked before.
Re:Boom chicka chicka "Server is down", boom chick (1)
Floodge (2040570) | more than 3 years ago | (#35822098)
Re:Boom chicka chicka "Server is down", boom chick (1)
ArmchairGeneral (1244800) | more than 3 years ago | (#35822278)
Re:Boom chicka chicka "Server is down", boom chick (1)
Squeeonline (1323439) | more than 3 years ago | (#35822918)
I think Slashdot is responsible for bringing down more websites than Anonymous!
Next time there's a raid, it should be posted here. Preferably under the guise of something that would interest
/.'ers
Re:Boom chicka chicka "Server is down", boom chick (1)
syousef (465911) | more than 3 years ago | (#35822518).
Re:Boom chicka chicka "Server is down", boom chick (0)
Anonymous Coward | more than 3 years ago | (#35826056)
I use l8tr [l8tr.org] , a free monitoring service for slashdotted pages, created by a slashdotter.
I pasted the URL of the
/. link, waiting until the website comes back up.
Brooklyn Space Program (0)
Anonymous Coward | more than 3 years ago | (#35822076)
I like the Brooklyn Space Program [youtube.com]
DHS Will Be Dropping By (2, Insightful)
Lawrence_Bird (67278) | more than 3 years ago | (#35822096)
Re:DHS Will Be Dropping By (1)
royler (1270778) | more than 3 years ago | (#35822230)
Re:DHS Will Be Dropping By (0)
Anonymous Coward | more than 3 years ago | (#35822258)
"Welcome to the Hoover Dam. I hope you have a good time on this dam tour. Please take all the dam pictures you would like."
Re:DHS Will Be Dropping By (0)
Anonymous Coward | more than 3 years ago | (#35822384)
This is incorrect. I was there a few weeks ago, and no such requirement was imposed, or even talked about. The new fancy bypass bridge is a great place to take pictures and view the dam.
Now we know what hit the southwest flight (1)
goombah99 (560566) | more than 3 years ago | (#35822740)
Perhaps that hole in the southwest plane was not so spontaneous
Re:DHS Will Be Dropping By (0)
c6gunner (950153) | more than 3 years ago | (#35823328):DHS Will Be Dropping By (0)
Anonymous Coward | more than 3 years ago | (#35823656)
It's a joke you half-wit
Re:DHS Will Be Dropping By (1)
codegen (103601) | more than 3 years ago | (#35825092)
Like DIY astro cams (0)
ackthpt (218170) | more than 3 years ago | (#35822106)?)
Are these hazardous to airplanes? (0)
Anonymous Coward | more than 3 years ago | (#35822114)
I'm going to put on my buzz kill hat and say that it's only a matter of time before one of these contraptions is going to get sucked into a jet engine or foul a propellor.
Re:Are these hazardous to airplanes? (2)
plover (150551) | more than 3 years ago | (#35825466) limited time periods of ascent and descent. At 95,000 feet, there is no traffic of any kind except for those that bring their own oxidants with them (rockets.) And when you think about it, airspace is really really big, so the chances of a mid-air collision are vanishingly small. When you say "a matter of time", you might be talking thousands of years.
As far as a jet engine vs. this contraption, well, given that it's being lifted by a balloon less than a meter in diameter, it's probably made of the lightest mass plastic components possible, and would have a pretty small chance of causing damage to an engine. And consider the worst case, where the battery gets sucked into the engine and explodes. In the middle of a screaming combustion chamber. Designed to burn gallons of Jet-A fuel every second. It's probably not going to make too much of an impact there.
Link to Vimeo (4, Informative)
Mentally_Overclocked (311288) | more than 3 years ago | (#35822118) [vimeo.com] This has the video of the images taken.
Re:Link to Vimeo (2)
Floodge (2040570) | more than 3 years ago | (#35822162)
Re:Link to Vimeo (0)
Anonymous Coward | more than 3 years ago | (#35823046)
With all those repeat images of the same items, from different angles, it would be neat to do a couple of projects:
(1) Separate the images out into 8 movies, possibly all running side by side on the same screen
or
(2) Do automatic calculations to generate 3-d images of all these things. The process would be as follows: (a) subtract one image from the next, and take a 2D- FFT of different areas to see how much the images slide. The FFT will have a max frequency at the sliding length. Do the same again, with each image adjusted by the theoretical slide, to refine the model and to identify the motion of individual points. Track the points through several images. Then estimate the angle adjustment, to yield a relative distance and direction for each point. The relative distance can then be quantified by identifying a known quantity (such as the length of a Toyota Prius). Once you have a 3-d image, then set up the launch on a 3-d viewer.
Re:Link to Vimeo (0)
Anonymous Coward | more than 3 years ago | (#35826060)
I just had a seizure watching that. 1895 images in 5 minutes? Whose idea was that?
Time-lapse video (0)
Anonymous Coward | more than 3 years ago | (#35822138)
Before the server went down, I got to a time-lapse video they made from the pictures taken: [vimeo.com]
Something cool to watch if the server remains down...
anyone got a mirror? (1)
frovingslosh (582462) | more than 3 years ago | (#35822176)
Re:anyone got a mirror? (0)
Anonymous Coward | more than 3 years ago | (#35823452)
built from used and recycled components
In other words, they were given a bunch of stuff as donations because they're a school, making the $60 total meaningless for anyone else who wants to do this. It's $60 if you already have the other (total - $60) parts.
USD 75, not 60 (2)
RemyBR (1158435) | more than 3 years ago | (#35822214)
Their website () says:
"Equipment
We innovated upon and continued the trend of low-cost flight platforms, building our craft entirely from off the shelf components for close to 75 dollars."
Also, they say they had sponsorship for the GPS unit and Helium.
Re:USD 75, not 60 (1)
Achra (846023) | more than 3 years ago | (#35822358)
Re:USD 75, not 60 (0)
Anonymous Coward | more than 3 years ago | (#35823310)
Another thing that doesn't make sense is the camera used. They don't state which camera they use but they do state they use the Ultra Intervalometer script for taking pictures. The cheapest camera I was able to find that will run this script is well over 150 dollars used.
Basically because they are a High School they got a lot of great people to donate stuff for the cause. That's great and so yeah their out of pocket cost was only around 60 - 75 dollars but in true slashdot fashion it is misleading to state that was the cost of the mission. Additionally I saw no indication on their equipment page that they used any used or recycled matarial other than a "soft cooler" from the lost and found.
To say "but price is a great frontier to explore" is just wrong. Their set-ups true cost is much more than many others that have been launched. Get real.
Re:USD 75, not 60 (1)
badran (973386) | more than 3 years ago | (#35825996)
They could have used a cheap or used Cannon with the chdk.
Re:USD 75, not 60 (1)
societyofrobots (1396043) | more than 3 years ago | (#35824726):USD 75, not 60 (1)
Floodge (2040570) | more than 3 years ago | (#35825182)
Re:USD 75, not 60 (0)
Anonymous Coward | more than 3 years ago | (#35827752)
Bingo, that's the first thing I noticed too. It's hardly doing it at lower cost if somebody gave you an expensive GPS unit and you don't figure it's value in to the cost of the project.. The Project Icarus guys used things they had lying around, but factored their value into the cost of the project. This is a more honest approach.
My assumption is that their SpotGPS unit is MORE expensive (not less) than the cellphone used in project icarus. The only real innovation here is that they used a soft-cooler instead of a styrofoam.. and also they said that their parachute failed and the equipment was OK anyways.
Well, what they said is that they got a SpotGPS Tracker, and that the GPS beacon in the phone didn't work out for them. They still used the phone, but I'm not completely sure on all the details of their project because I keep getting this:
Website you were trying to visit was disabled for 5 minutes, because it received over 20% of total server requests.
It means that this website was using over 20% of processor resources, which is above allowed limit.
Website was temporary disabled to protect the server from overloading.
Please try again in 5 minutes.
I'm not sure which exact unit they used for GPS, but the cheapest SPOT GPS I found by following their page's link was just under $100 (USD). So we just blew the budget they listed on completing the minimum payload- their other suggestion was to go with a satellite phone.
They don't say how much they paid for the 8gb memory card, but figure you can pick one up for $5-10.
The place they linked to for helium says just under $500 for just shy of 300 cubic feet, or if you're renting/refilling one of their tanks a little over $200.
They said they used a 24" parachute, but I didn't see where they listed the cost of the balloon or how much helium they actually used.
Figure the cooler they nicked from the Lost & Found at maybe five bucks, $20 if you get fancy but it looks pretty cheap to me.
So I'd say to replicate their project done a little better, a one-time cost of $100 for the GPS Tracking gear out of the gate. Figure maybe $100 to $300 for the camera and comm systems, depending on how fancy you want to get. I'd probably budget another $100 to $300 for a decent cooler, parachute and rigging, and balloon. Figure the lithium batteries will cost you say $20 for rechargables, or recurring at $5 to $10 for a pack of 10 disposables. The helium will be cheapest if you spend another $100 or so for your own tank and valve, and just get it refilled.
So altogether I think you could do a handful of launches for around $1,000 (US). That will of course fluctuate a lot depending on how much you're willing to hoof it or bike, etc. to save on fuel costs.
BUT if you skimped and scrounged like these kids did, going to pawn shops and garage sales, you could probably pull off a few launches for under $200. If you found a way to cheapskate the helium, you could easily do it for under $100.
Nothing new (1, Troll)
kuzb (724081) | more than 3 years ago | (#35822270)
This is not new or exciting. This gets done a few times every year by random people, I'd hardly call it news.
Re:Nothing new (1)
Leebert (1694) | more than 3 years ago | (#35822314)
This is not new or exciting.
Yeah, I would have expected the editor to have said something like: "Near-space photography via balloon isn't quite new any more"
The point was how cheap this one was done: That the price point is now down to 60 bucks.
Re:Nothing new (0)
Anonymous Coward | more than 3 years ago | (#35822376)
I agree, I've seen too many of these to care, what I want to know is there a way to stop the balloon from expanding at a set point and not burst.
Once that's done stop the gas escaping!
Cheap low orbit (cannon) satelite..
Jon.
at least 3-4 slashdot stories about this recently (1)
peter303 (12292) | more than 3 years ago | (#35822850)
Stop stomping on the sprouts. (4, Interesting)
jeko (179919) | more than 3 years ago | (#35823350):Stop stomping on the sprouts. (1)
t2t10 (1909766) | more than 3 years ago | (#35823516)
US science literacy is still quite high in international comparisons: [nationmaster.com]
The US also is among the top for money spent per secondary student: [nationmaster.com]
Should the US improve? Of course. But ideologically motivated diatribes like yours aren't helping. Take your own advice: improve your literacy and then start arguing with facts instead of fear mongering.
Break put the Champagne! We're #14! (1)
jeko (179919) | more than 3 years ago | (#35823948)
Vulcan-like reasoning in your posts, but your problem is that your cites don't say what you'd like them to, and you don't have any experience of your own to draw from yet. You think education is doing OK in this country because some book or website tells you it is. I think things are falling apart because I've watched it with my own eyes, from both sides of the lectern. Emotional diatribes? Absolutely. I've watched our kids go from aspiring to be number one to being proud of being number 14. I'm ready to start chugging hemlock at this point.
When I was young, we were thinking "Mars, then the stars." From your other posts, your hopes and dreams are apparently to be left alone with variations of "Mine! My Precious!"
The reason I hammer away at you is that you break my heart, and I'm terrified of the timid, miserly, meager, threadbare, hopeless possible future you represent.
Re:Break put the Champagne! We're #14! (1)
t2t10 (1909766) | more than 3 years ago | (#35825024), and a registered Democrat, although people like you make me ashamed.
And when you were young, per-capita education and health care spending in the US was a fraction of what it is today (in constant dollars), so lack of spending is not what killed those dreams.
And the reason I hammer away at you is that I am terrified of the timid, miserly, meager, threadbare, hopeless possible future you represent. You are the left-wing counterpart to the nutty worshippers of Ayn Rand and the Christian right. The FUD you people spread between you is what is causing people to have so little faith in our future. You sabotage reasonable political debate with your demagoguery and prejudices.
My personal experience agrees with the statistics: education and health care both are quite good in the US. If your kids and your students are disillusioned and fearful, it's because of the way you raised them and taught them.
Oh, well, if we're beating Luxembourg... (2)
jeko (179919) | more than 3 years ago | (#35825594)
.. historically recent wars and occupations, and you think we're doing OK? The UK is number four. We're ten spots down from that, despite the fact the we have orders of magnitude more resources to work with.
You're OK with this? Ask me how I know you don't have any kids.
And when you were young, per-capita education and health care spending in the US was a fraction of what it is today (in constant dollars), so lack of spending is not what killed those dreams.
Simply not true. When I was a boy, we were in the middle of the Space Race. Education was almost getting properly funded. Teachers weren't taking part-time jobs to get by. Textbooks were not considered a rare and precious resource. Field trips did not spur panicked begging for the parents to chip in. Schools didn't whore themselves out to McDonalds and Burger King hoping to get a few bucks.
This is how I know you haven't spent any time near a public classroom lately. You know what parents buy for schools these days? Toilet paper. Copy paper. Pencils. My school district just took up a collection to buy gas for the school buses, and I'm in a wealthier school district. The large amount of money getting collected is not reaching the classroom, and if you don't know that, then you just don't know what you're talking about. I live in a school district that includes literally million-dollar homes and our teachers dress in cast-offs from Goodwill and drive 20-year-old cars.
I'm not even going to worry about refuting this because anyone who's a parent these days knows. Every scientist I know or ever met is either livid or in despair about the state of science education in public school today -- and yeah, I'm very comfortable making that statement on Slashdot. Have you even heard about what's going on with the Texas State Board of Education? Any working scientists who wanna jump in with t2t10 and talk about what a great job we're doing teaching science in the US, by all means speak up.
What exactly about your field do you feel our public schools are doing a wonderful job of explaining?
My personal experience agrees with the statistics
The statistics? We've talked about this before. The statistics are that we're getting beaten by Cuba in healthcare and Ireland in education. We're getting our butts handed to us by small island nations with few natural resources. You're bragging that you can place middle of the pack in the Girl Scout softball league.
You sabotage reasonable political debate
Are you even watching the news? I supported Reagan. The first time. In 1980, David Stockman wasn't a raging lunatic when he argued the Laffer Curve and that lower taxes would spur growth which would yield greater overall tax revenue. In 2010, even Stockman has recanted. We live in a world where Massey Energy can kill dozens of miners with impunity, where BP can destroy the Gulf of Mexico, hide it, and still post profits in the Billions in the same quarter. Reagan could almost be reasoned with. The same is not true of Sarah Palin and Donald Trump. There is no more "reasonable political debate." The situation is not in doubt, not in 2011. All the tired old ideas, that we can reach Nirvana by cutting taxes for billionaires and bleeding the middle class dry while telling the poor to simply die and decrease the surplus population, that nonsense was empirically disproven decades ago.
education and health care both are quite good in the US.
My, what a sheltered life you must have led.
And the reason I hammer away at you is that I am terrified of the timid, miserly, meager, threadbare, hopeless possible future you represent.
Really, if it's not too much trouble, could you come up with your own lines?
OK, probably not. I understand. Feel free to keep quoting people who have actually been there and know what they're talking about.
Re:Oh, well, if we're beating Luxembourg... (1)
t2t10 (1909766) | more than 3 years ago | (#35829484) true; go check the numbers instead of lying through your teeth.
So, your school district has a good tax base. If your students aren't getting the education they need, your school district is wasting the money on something else. Become active in local politics to fix this or stop complaining.
And you have the nerve to complain about insufficient funding for anything, and about people with billion dollar bonuses? Reagan started this with his Reagonomics, Trickle Down Economics, and ill-advised military build-up and interventionism. You want to know why the US isn't number one in so many areas? Look to Reagan and then look in the mirror.
You have a good health plan and you live in a school district with a good tax base. If you can't get your medical bills reimbursed and your kids can't get a good education, you only have yourself to blame for it. Let me help you out a little: sign up for an HMO or PPO, become active in local politics, and put 20% of your income in a savings account every month.
The US should indeed become number one on health and education in the world--we spend enough money on it. And to get there, we need to keep people like you from sabotaging that goal with your selfishness, your distortions, and your negativity.
Re:Break put the Champagne! We're #14! (0)
Anonymous Coward | more than 3 years ago | (#35825672)
Might it be too forward to suggest you both have valid viewpoints? There are places in this country where the public schools are quite well funded, and provide a lot of kids an excellent education. There are also places where the public schools are frighteningly understaffed and underfunded, and no parent with the means would voluntarily enroll their own children in them. There are places where the schools are run by rational school boards, and others where various anti-science mongers try to force a curriculum of misleading nonsense. There are districts where discipline is competently applied, and districts where it's completely irrational.
The schools we provide our children are like anything else: we generally get the quality we pay for, although strongly urban schools often have some of the lowest scores per student dollar spent. I think this reflects on the importance of the extramural environment, including such factors as inattentive parenting, rough neighborhoods, gangs, poverty, etc.
Finally, the averages you've both been arguing about are just that: averages. What may be more meaningful than arguing about #14 would be discussing the trend of the U.S. students: are they scoring better or worse today than they were 5 years ago? 10 years ago? 20 years ago? Then discover why, and find out what's changed. If being #14 means we dropped 10 places in the international ranking over the last 20 years, is it because our scores went down, or is it because South Korea's scores went up so much faster than ours? And what's the basis for the test? Are Koreans really better educated, or are they simply better prepared to take the standardized tests? Is this test the only meaningful measure of a South Korean student, or of a United States student?
Both of you are so busy waving your own point-supporting anecdotes at each other that neither of you can claim to have the "right answer" here.
Re:Stop stomping on the sprouts. (1)
Ol Olsoc (1175323) | more than 3 years ago | (#35827684) of bat-crap crazy to plain odd, to nerdy but not weird, and one normal sort. And of course the non-girlie girl cutie. There's room for all types here.
Balloons? That is the sort of science project that is magic for people. It's fun to put together and execute, and you can get a whole class involved, separating into teams, that plan payload, build systems, and work out tracking and recovery missions. Amateur radio operators often do this often in conjunction with the schools, working the telemetry and tracking. Kids can do "pongsat" experiments, coming up with experiments that fit in a small space, like a ping pong ball (think effects on seeds sent to that altitude, maybe an aerogel experiment. Lots of cool stuff to be done.
One odd thing is that many people don't believe that you can get permission to do this. There are rules and guidelines, but since they've been launching weather balloons for years now, it's pretty well worked out.
Who do I contact (1)
XxtraLarGe (551297) | more than 3 years ago | (#35822352)
fatal fri. upon us, & no changes planned (0)
Anonymous Coward | more than 3 years ago | (#35822368)
the church&state.gov remain as the only chosen ones.
this guys'
.gov.id.watch will probable explode too;
some spys like us?
I'm Surprsed (2)
Virtucon (127420) | more than 3 years ago | (#35822484):I'm Surprsed (1)
Kittenman (971447) | more than 3 years ago | (#35822912)
So, maybe unsolicited balloons are a concern.
Re:I'm Surprsed (1)
mattcasters (67972) | more than 3 years ago | (#35823128)
Balloons are a very unreliable vehicle for delivering bombs on account of the wind being rather unpredictable and blowing in different directions on different altitudes.
Unsolicited cars and trucks on the other hand are a concern!
Re:I'm Surprsed (1)
pubwvj (1045960) | more than 3 years ago | (#35823536)
Attach the balloon to a fishing line. Works great. Makes it retrievable and keeps it positioned.
Re:I'm Surprsed (1)
mattcasters (67972) | more than 3 years ago | (#35824050)
Yes of course. However that would arguably place the explosive payload in the wrong location... namely over your head.
"maybe unsolicited balloons are a concern" (2)
jeko (179919) | more than 3 years ago | (#35823650) drop a nuclear bomb, since that had happened twice.
My mind boggles at the level of paranoia it takes to go from "Hey, look a balloon" to "Maybe it's from the terrorists! Run away, run away!"
Did you avoid bunnies after watching "Monty Python and the Holy Grail" too?
The world is a dangerous place. There are sharks in the water. They have eaten people. But if that fear keep your toes dry on the sand, then I feel sorry for you. I can't imagine living in that much fear all the time.
Re:"maybe unsolicited balloons are a concern" (0)
Anonymous Coward | more than 3 years ago | (#35826322)
Did you avoid bunnies after watching "Monty Python and the Holy Grail" too?
Yes, but I'm not afraid of terrorists - in fact I joined the US Army just to get the chance to shoot some as their penance for bombing us. Big Narley teeth on the other hand I don't mess with.
Re:"maybe unsolicited balloons are a concern" (1)
loimprevisto (910035) | more than 3 years ago | (#35826496). :-) (1)
jeko (179919) | more than 3 years ago | (#35823388)
Hmm, Virtucon has a low user id, probably old enough to remember that it takes 99 ballons to scramble the jets. [wikipedia.org]
:-)
Re:I'm Surprsed (1)
camperdave (969942) | more than 3 years ago | (#35823398)? (1)
randomErr (172078) | more than 3 years ago | (#35822602)
My work proxy says the page has a virus on the page. Any one not able to access the webpage?
Re:Virus on the page? (0)
Anonymous Coward | more than 3 years ago | (#35825170)
yeah, the virus is part of the conspiracy to relate relevant technical success relating to a science/engineering project to the uninformed masses.
DISCLAIMER: This statement was not meant to be factual
donate the imagery to openstreetmap (1)
richlv (778496) | more than 3 years ago | (#35822606)
hey ! and to continue the cycle of good intentions, donate the imagery to osm.org to improve maps
;)
I have a theory (1)
blair1q (305137) | more than 3 years ago | (#35822652)
Kids in high school think their parents are stupid.
stability (3, Interesting)
georgesdev (1987622) | more than 3 years ago | (#35822886)
I mean near space cheap photography has been done many times.
What's really missing is something to get a stable shooting of the images
right now, it makes me wanna puke!!! Then the animation would really be cool!
Re:stability (0)
Anonymous Coward | more than 3 years ago | (#35823954)
Honestly?
One or two gyroscopes to provide image stabilization would not cost more than $60 in parts and labor . I think that the key factor is weight. Adding in gyroscopes, another set of motors, micro controller, extra battery capacity, would potentially limit the maximum height that the balloons could reach.
But, then again, they could always make bigger balloons. Any thoughts?
re:High Schoolers Push Down Price of Near-Space Ph (1)
JohnVanVliet (945577) | more than 3 years ago | (#35823738)
well it looks like the site
/.'ed
h??p://
has been
the servers are overloaded
the
/. effect in action
very crazy (0)
Anonymous Coward | more than 3 years ago | (#35826622)
High School students very crazy.BVLCARI Watchs,A
"Near" space ? (1)
RockDoctor (15477) | more than 3 years ago | (#35826762)
OK, I'm a pedant. I'll get the phone book so you can call someone who cares.
What is a High Schooler? (1)
AP31R0N (723649) | more than 3 years ago | (#35827102)
Is it someone who 'high schools'?
Maybe subby meant high school students.
Good thing it didn't drift west (1)
return 42 (459012) | more than 3 years ago | (#35830140)
It might have photographed Barbra Streisand's house. | http://beta.slashdot.org/story/150374 | CC-MAIN-2014-42 | refinedweb | 5,390 | 79.6 |
Hosting a HTTPS static website on Amazon S3 w/ CloudFront and Route 53
If your deadline is tight, you have no experience with AWS, and your Dev Ops was hit by a bus, then this article is for you!
I need to quickly configure a static website ready for production with minimal hassle and tinkering with managing a server. I’ve decided to use a combination of Amazon S3 and CloudFront due to the ease of configuration, or so I’ve been told.
After following a few guides, I was able to get it quickly up and running. However there were a few gotcha’s I’ve faced along the way. So I decided to write this guide for myself in the intent you have no experience with AWS and to hopefully be clear of any confusions I’ve faced for what is a simple process with alot of moving parts.
There are no requirements besides:
- An Amazon AWS account. It’s free.
- A domain name you’ve purchased from Amazon Route 53. If you own the domain from somewhere else (i.e crazydomains), you can transfer it to Route 53 for a small fee.
- AWS CLI installed on your terminal.
We’ll be covering Amazon S3, Amazon Route 54 and Amazon CloudFront with SSL configuration. How it works is that when a user visits a domain and makes a request, Route53 will redirect that request to our CloudFront distribution which is a cached copied of our website that is originally stored in an Amazon S3 bucket.
If none of that make sense, I’ll explain how each service works more in detail.
Note: We’ll be using example.com as our custom domain name for demonstration’s sake.
Amazon S3
Amazon S3 is a web storage service which provides low latency data storage infrastructure at very low costs for developers and a cost & time effective solution for cloud storage.
It also integrates nicely with Amazon CloudFront, a service that’ll provide low latency access to our site around the world and scalability for free (to an extent).
Let’s start with creating a new bucket. In the AWS Dashboard, goto S3 ( in Storage services) and click on Create Bucket.
We will be prompted to enter our bucket name and region, in which you can just pick the closest region to you. We’ll leave the properties and permission levels to default as it is for now.
Keep in mind that bucket name has a global namespace, and you may have heard from somewhere that your bucket name must be the same as your domain name in order for static hosting to work. The official AWS documentation says.
This means you will be unable to link the custom domain with your Amazon S3 bucket if the name is already taken.
This is however a non-issue as we will integrate it with Amazon CloudFront, which can be configured to use an S3 bucket of any name. With CloudFront, users that visit our domain will directly fetch data from the CloudFront distribution which in turn caches contents from our S3 bucket.
After the bucket is created, let’s sync our website through the AWS CLI where /build is your folder containing the root index.html file and example.com is the name of your S3 bucket.
aws s3 sync build/ s3://example.com
We will then need to configure our bucket so it is publicly available for our CloudFront to access. We can do this pasting the following policy statement into the Bucket Policy.
{
"Version": “2012-10-17",
"Statement": [
{
"Sid": "IPAllow",
"Effect": "Allow",
"Principal": "*",
"Action": "s3:GetObject",
"Resource": "arn:aws:s3:::example.com/*"
}]
}
What this policy does is allow (“Effect”: “Allow”) everyone (“Principal”: “*”) to get all objects from our S3 bucket (“Action”: “s3:GetObject”) named example.com (“Resource”: “arn:aws:s3:::example.com/*”).
We’ll also need to enable static website hosting under properties.
Take note of the endpoint, which we’ll need to configure in CloudFront. If your S3 bucket is configured properly, users can now visit your endpoint and view your website over the web.
We’re done with S3 configuration!
Amazon CloudFront
Amazon CloudFront act as our content delivery network for our Amazon S3 bucket. It will cache contents around edges of datacenters around the globe and acts as a global load balancer, ensuring users are accessing our website in datacenters most closest to them with the lowest latency. It essentially makes our website scalable and fast in most places worldwide at virtually no cost (to an extent).
In the AWS Dashboard, Goto CloudFront which is located under Network & Content Deliver. We will create a new CloudFront distribution and select on Get Started on Web Distribution.
We will prompted to input our CloudFront Distribution settings. For now we only want to input our Origin Domain Name and our Alternate Domain Name.
Origin Domain Name: Here we will link our CloudFront Distribution with our Amazon S3 bucket.
Double clicking the textfield will prepopulate it with Amazon S3 links, however it may be incorrectly populated. You want to use the actual endpoint shown in Static Website Hosting of your S3 bucket but without the http prefix i.e xxxxx.s3-website-eu-west-1.amazonaws.com.
Alternate Domain Names (CNAMEs): Input the custom domain name that you own i.e example.com
Click on Create Distribution to finish the creation. It’ll take a couple of mins before it’s ready to be used.
Once completed, visit your s3 Static Website Endpoint to check if data is being pulled through CloudFront instead of Amazon S3. You can verify by checking Usage under Reports and Analytics to see if any data is there once you’ve visited the website.
We are done with most of the configuration of CloudFront as the default settings suit 99% of our use cases. We’ll return to this for the final configuration with SSL.
Amazon Route 53 configuration
Amazon Route 53 is used to register domain names and route internet traffic of our domain name to our resources which would be either our CloudFront or our S3 bucket. If you bought your domain from another site, you can transfer domain ownership to Route 53 for a small fee.
As Amazon CloudFront is already configured to cache contents from our S3 Bucket, we only need to configure our domain on Route 53 to point to our CloudFront web distribution.
If you haven’t already, create a new Hosted Zone in Route 53 with your domain name. Leave the default to Public Hosted Zone.
Once created, your domain in Hosted Zones and click on Create Record Set. We will be creating an an Address record (A — IPv4 address) type record set with an Alias set to yes.
For your name, leave it blank as we are creating the record set for the main domain name. Input your CloudFront distribution domain name ( it should look like xxxxxxxx.cloudfront.net ) into your Alias Target field and leave the the default values of Routing Policy and Evaluate Target Health to default values of Simple and No. The Alias Hosted Zone ID’s should autofill if configured correctly.
After creating the Creating Record set, we are done with Amazon Route 53 configuration.
SSL Certificate Configuration
We’ll need to make our website SSL certified, which ensures data between your browser and the server is encrypted so intruders cannot modify or read the contents of the packets sent from your computer to the server. It’s also a requirement for modern browser features such as web push notifications. Search engines will also penalise your SEO rankings if it isn’t HTTPS certified.
This requires a certificate which can be produced for free using AWS Certification Manager. This initially didn’t work for me out of the box, so I ended up configuring it with a paid certificate which was quite cumbersome to prepare and import into AWS. The free certificate did work after revisiting it, I believe you have to wait for a while in order for the SSL certificate to work.
If you must configure it with a paid certificate that you’ve bought, I saved my notes on a gist. But I can’t think of any single reason why you would (let me know if you disagree)and I suggest that you don’t.
To request a certificate , just click on the ‘Request or Import a certificate with ASM’ within the Edit Distribution in CloudFront of Domain and it will take you directly to certificate creation.
Hereon the process of Requesting a Certificate is straightforward. Just add all the domains you want to attach to the certificate, including any subdomains such as if you wish.
Then click on Review and Request and it’s just a matter of verifying you own the domain by clicking on the email that sent’s out to the domain name owner.
After that’s done, just select verified SSL in your CloudFront distribution.
Once set, you should be able to visit your domain using the https prefix i.e and your site should appear. It’ll appear with a secure icon on Google Chrome.
Other options
You have done the bulk of the work in making your site secure. Perhaps we want to tweak the behaviour such that such as to redirect all HTTP requests HTTP to HTTPS.
We can do that by changing the Viewer Protocol Policy to Redirect HTTP to HTTPS in the Origin Domain Settings.
Now every-time we visit the domain on normal HTTP, we’ll be redirected to the HTTPS version of the website.
There’s other neat things we can do to such as redirect subdomain URL’s into our main domain. In our example we want to redirect all www domains into our main SSL certified domain name.
The quickest solution is to create an S3 bucket with the subdomain as the name, and set the Static website hosting properties of the website.
I haven’t figured out the alternative if the bucket name is already taken along with this other gaps.
In Summary
There’s still quite a few gaps that I need to figure out. The need for this blog was initially stemmed from getting lost with the process particularly configuring an unbundled paid certificate instead of the much easier path of using Amazon’s ACM.
But if you got this far, you’ve probably done enough to satisfy your project and buy yourself time to figure out the kinks.
If you were stuck, or have any questions, comment below! | https://medium.com/@matthewmanuel/hosting-a-https-static-website-on-amazon-s3-w-cloudfront-and-route-53-f347a16b6a91 | CC-MAIN-2019-22 | refinedweb | 1,766 | 62.27 |
Learn about the facade pattern, why you may or may not want to use one with NgRx, and how to create a facade.. (You can find the sample code in this repository.)
What are facades?
In code, the term “facade” refers to the facade pattern, which is a structural design pattern from the famous book Design Patterns (usually called the “Gang of Four” in reference to the authors). Just like the front of a building hides what’s inside to passersby, a facade in code hides the complexity of underlying services by only exposing certain methods. You’ll often hear this hiding of complexity referred to as “adding a layer of abstraction.” An abstraction is a simpler and more general model of something that only provides what its consumer needs to know.
The facade pattern, and abstraction in general, is similar to the relationship between you and your car. You only need to know that when you turn the key and press the gas pedal, your car moves forward. You don’t need to care about the underlying mechanics of the engine or the science of combustion in order to go to the grocery store.
Here's how the facade pattern can be represened in a diagram:
You can see here how the clients don't know anything about the other services. They simply call
doSomething() and the facade handles working with the services behind the scenes.
Facades in NgRx
Recently, there has been a lot of discussion about whether or not to use a facade with NgRx to hide away bits like the store, actions, and selectors. This was sparked by an article by Thomas Burleson called NgRx + Facades: Better State Management. A facade is basically just an Angular service that handles any interaction with the store. When a component needs to dispatch an action or get the result of a selector, it would instead call the appropriate methods on the facade service.
This diagram illustrates the relationship between the component, the facade, and the rest of NgRx:
Let’s take a look at an example to understand this better. Here’s the
BooksPageComponent from the sample code for my NgRx Authentication Tutorial (I’ve hidden the code inside of the
Component decorator to make it easier to read):
// src/app/books/components/books-page.component.ts import { ChangeDetectionStrategy, Component, OnInit } from '@angular/core'; import { select, Store } from '@ngrx/store'; import { Observable } from 'rxjs'; import * as BooksPageActions from '../actions/books-page.actions'; import { Book } from '../models/book'; import * as fromBooks from '../reducers'; import { Logout } from '@app/auth/actions/auth.actions'; @Component({ /* ...hidden for readability */ }) export class BooksPageComponent implements OnInit { books$: Observable<Book[]>; constructor(private store: Store<fromBooks.State>) { this.books$ = store.pipe(select(fromBooks.getAllBooks)); } ngOnInit() { this.store.dispatch(new BooksPageActions.Load()); } logout() { this.store.dispatch(new Logout()); } }
This component imports the store, some actions, and some reducers. It dispatches the
Load action during
ngOnInit and uses a selector for the books in the constructor. It also dispatches an action when the user logs out.
If we were instead using a facade in the component, the class would look something like this (I’ll leave out the imports and decorator for the sake of brevity):
// src/app/books/components/books-page.component.ts // above remains the same except for the imports export class BooksPageComponent implements OnInit { books$: Observable<Book[]>; constructor(private booksFacade: BooksFacade) { this.books$ = this.booksFacade.allBooks$; } ngOnInit() { this.booksFacade.loadBooks(); } logout() { this.booksFacade.logout(); } }
We’ve replaced the selector with an observable on the
booksFacade service. We’ve also replaced both action dispatches with calls to methods on the
booksFacade service.
The facade would look like this (again leaving out the imports for the sake of brevity):
// imports above @Injectable() export class BooksFacade { allBooks$: Observable<Book[]>; constructor(private store: Store<fromBooks.State>) { this.allBooks$ = store.pipe(select(fromBooks.getAllBooks)); } getBooks() { this.store.dispatch(new BooksPageActions.Load()); } logout() { this.store.dispatch(new Logout()); } }
Looking at the component code in isolation, you’d have no idea that this Angular application is using NgRx for state management. So, is that a good thing or a bad thing? Let’s talk about some pros and cons to this approach.
The Case for Facades
Let’s first consider some pros of the facade pattern for NgRx.
Pro #1: Facades provide a better developer experience.
One of the biggest arguments for using the facade pattern is its boost to developer experience. NgRx often gets flack for requiring a lot of repetitive code for setup and for the difficulty of maintaining and scaling a lot of moving parts. Every feature requires changes to state, the store, actions, reducers, selectors, and effects. By adding a layer of abstraction through the facade, we decrease the need to directly interact with these pieces. For example, a new developer writing a new feature listing books could just inject the
BooksFacade and call
loadBooks() without needing to worry about learning the intricacies of the store.
A facade wraps the store up, and allows your component to only know about one thing, the store facade. It's clean. You only inject one thing. It becomes the interface for the components and the store. It feels much cleaner than with a store by itself.— Frosty (@aaronfrost) October 29, 2018
Pro #2: The facade pattern is easier to scale than plain NgRx.
The second argument for the facade pattern is how easy it is to scale. Let’s say, for example, we needed to develop seven new features for a large application. Several of those features would probably overlap in some of the ways they affect the state of the application, as well as in the data they consumed. We could cut out a lot of repetitive work by using facades:
- First, we'd determine the smallest number of state changes we need to make based on unique use cases.
- Next, we'd add them to the right places in the application-wide NgRx setup files (like changes to the application state or reducers).
- Finally, we'd appropriate methods and selectors to our facade.
The seven new features could then be added quickly while letting the facade worry about the underlying NgRx pieces.
"Using facades in NgRx can increase dev productivity and make apps easier to scale."
The Case Against Facades
This reduced friction in development and scaling really sound great, but is everything a bed of roses? Let’s take a look at the other side of the argument.
Con #1: Facades break the indirection of NgRx.
The first time I saw facades used with NgRx, I had an immediate response of, “Wait, we just spent all this time setting up NgRx actions, reducers, and effects, but now we’re hiding all of that away with a service?” It turns out that gut feeling I had is one of the main arguments against using facades.
While NgRx does get criticized for having a lot of moving parts, each of those parts has been designed to perform a specific function and communicate with other parts in a specific way. At its core, NgRx is like a messaging system. When a user clicks a “Load Books” button, the component sends a message (an action) that says, “Hey, load some books!” The effect hears this message, fetches the data from the server, and sends another message as action: “Books are loaded!” The reducer hears this message and updates the state of the application to “Books loaded.”
We call this indirection. Indirection is when part of your application is responsible for something, and the other pieces simply communicate with it through messaging. Neither the reducer nor the effects know about the button that was pressed. Likewise, the reducer doesn’t know anything about where the data came from or the address of the API endpoint.
When you use facades in NgRx, you circumvent this design (at least in practice). The facade now knows about dispatching actions and accessing selectors. At the same time, the developer working on the application no longer knows anything about this indirection. There’s simply now a new service to call.
The Redux pattern has a high code cost to achieve indirection. Turning around and removing the indirection with facades makes me wonder why you are paying the Redux cost in the first place. Why not something like Akita at that point?— Mike Ryan (@MikeRyanDev) October 29, 2018
Con #2: Facades can lead to reusing actions.
With NgRx’s indirection circumvented and hidden away from developers, it becomes very tempting to reuse actions. This is the second major disadvantage to the facade pattern.
Let’s say we’re working on our books application. We’ve got two places where a user can add a new book: 1) the list of books and 2) a book’s detail page. It would be tempting to add a method to our facade called
addBook() and use it to dispatch the same action in both of these instances (something like
[Books] Add Book).
However, that would be an example of poor action hygiene. When we come back to this code in a year or two because of a bug that’s cropped up, we won’t know when we’re debugging where
[Books] Add Book came from. Instead, we’d be better off following Mike Ryan’s advice in his ng-conf 2018 talk Good Action Hygiene. It’s better to use actions to capture events, not commands. In our example, our
booksReducer could simply have an additional case:
function booksReducer(state, action){ switch (action.type) { case '[Books List] Add Book': case '[Book Detail] Add Book': return [...state, action.book]; default: return state; } }
When writing actions, we always want to focus on clarity over brevity. Good actions are actions you can read after a year and tell where they are being dispatched.
When it comes to the facade pattern, we can mitigate this problem by creating a
dispatch method in our facade instead of abstracting away actions. In our example above, instead of having a generic
addBook() method, we’d call
facadeService.dispatch(new AddBookFromList()) or
facadeService.dispatch(new AddBookFromDetail()). We lose a little bit of abstraction by doing this, but it will save us headaches in the future by following best practices for action creation.
So, which is it?
Facades can greatly speed up development time in large NgRx apps, which keeps developers happy and makes scaling up a lot easier. On the other hand, the added layer of abstraction can defeat the purpose of NgRx’s indirection, causing confusion and poor action hygiene. So, which wins out?
All developers at some point in their career will learn that increased abstraction always comes at a price. Abstraction trades transparency for convenience. This may turn up in difficulty debugging or difficulty maintaining, but it will come up at some point. It’s always up to you and your team to determine whether that trade-off is worth it and how to deal with any downsides. Some folks in the Angular community argue that the benefits of the facade pattern outweigh the cost of increased abstraction, and some argue they don’t.
I believe the facade pattern certainly has a place in NgRx development, but I’d offer two caveats. If you’re going to use facades with NgRx:
- Make sure your developers understand the NgRx pattern, how its indirection works, and why you’re using a facade, and
- Promote good action hygiene by using a
dispatchmethod in your facade service instead of abstracting actions.
By teaching your teammates the NgRx pattern while also using a facade, you’ll save yourself some headaches when things start to break. By using a
dispatch method in your facade, you’ll mitigate the tendency to reuse actions at the expense of keeping your code readable and easy to debug.
"Even though using facades in NgRx can be really helpful, it's important to keep good action hygiene in mind and not reuse actions."
Implementing a Facade
Now that we know why we may or may not want to use a facade with NgRx, let’s create one. We’re going to add a facade to a simple books application and use the facade to simplify the component that lists the book collection. We’ll also use good action hygiene by using the
dispatch pattern described above.
Set Up the Application
To get started, we'll need to be sure Node and npm are installed. We’ll also need the Angular CLI:
npm install -g @angular/cli
The code for this tutorial is at the Auth0 Blog repository. Here are the commands we’ll need:
git clone cd ngrx-facades npm install git checkout 8e24360
Test the Application
We should now be able to run
ng serve, navigate to, and click “See my book collection.”
Add the Facade Service
Since a facade is just a service, we can generate our initial code with the Angular CLI:
ng generate service /books/services/books-facade --spec=false
We’ll now have a
services folder inside of our
books folder with a file called
books-facade.service.ts. Opening it, we’ll see the following:
// src/app/books/services/books-facade.service.ts import { Injectable } from '@angular/core'; @Injectable({ providedIn: 'root' }) export class BooksFacadeService { constructor() { } }
Note: You’re welcome to drop
Servicefrom the class name. Some people also remove
servicefrom the file name and move this file somewhere more related to state to prevent confusion with HTTP services. I’m leaving those details up to you and sticking with some simple defaults.
Let’s think about what we initially want our new facade to do for us. We’re going to use it in the
BooksPageComponent we discussed above. Let’s remind ourselves what that component class looks like:
// src/app/books/components/books-page.component.ts // Omitting Component decorator and imports for brevity. export class BooksPageComponent implements OnInit { books$: Observable<Book[]>; constructor(private store: Store<fromBooks.State>) { this.books$ = store.pipe(select(fromBooks.getAllBooks)); } ngOnInit() { this.store.dispatch(new BooksPageActions.Load()); } }
Our component does the following:
- Calls a selector for the books
- Dispatches a
Loadaction from the store during the
ngOnInitlifecycle hook.
Let’s implement replacements for these in our facade. We’ll be able to simply copy over quite a bit of this code with a few modifications.
We know that we need to inject the
Store in the constructor, which will take the
State exported from
src/app/books/reducers/books.ts just as it is in the
BooksPageComponent. We also know we’ll need a
dispatch method that takes an
Action (which we’ll need to import). Let’s update our facade accordingly by importing what we need and adding those things:
// src/app/books/services/books-facade.service.ts import { Injectable } from '@angular/core'; import { Store, Action } from '@ngrx/store'; import * as fromBooks from '../reducers'; @Injectable({ providedIn: 'root' }) export class BooksFacadeService { constructor(private store: Store<fromBooks.State>) { } dispatch(action: Action) { this.store.dispatch(action); } }
Notice that we’re using the same imports and the same injection of the store as in the component.
Finally, let’s initialize a new observable for the books and use a selector in the constructor. We’ll basically copy over what we’ve got in the component right now, but let’s change the name of the observable to
allBooks$ just to be clear. We’ll also need to import
Observable from
rxjs, add
select to our imports from
ngrx/store, and import the
Book model. The finished code will look like this:
// src/app/books/services/books-facade.service.ts import { Injectable } from '@angular/core'; import { Store, Action, select } from '@ngrx/store'; import { Observable } from 'rxjs'; import * as fromBooks from '../reducers'; import { Book } from '../models/book'; @Injectable({ providedIn: 'root' }) export class BooksFacadeService { allBooks$: Observable<Book[]>; constructor(private store: Store<fromBooks.State>) { this.allBooks$ = store.pipe(select(fromBooks.getAllBooks)); } dispatch(action: Action) { this.store.dispatch(action); } }
Our facade is done! Now let’s update our books page component.
Update the Books Page
The first thing we’ll do to update the
BooksPageComponent is replace the injection of the store with the
BooksFacadeService:
// src/app/books/components/books-page.component.ts // no changes to above imports import { BooksFacadeService } from '../services/books-facade.service'; @Component({ // hidden, no changes }) export class BooksPageComponent implements OnInit { books$: Observable<Book[]>; constructor(private booksFacade: BooksFacadeService) { this.books$ = store.pipe(select(fromBooks.getAllBooks)); } ngOnInit() { this.store.dispatch(new BooksPageActions.Load()); } }
Of course, we’ll now see red squiggles in our editor underneath the references to the store. We can fix those by replacing the selector with
booksFacade.allBooks$ and the other reference to
store inside of
ngOnInit with the
booksFacade. The rest will remain unchanged. Cleaning up the imports, the finished code will look like this (again, omitting the decorator since there are no changes):
import { ChangeDetectionStrategy, Component, OnInit } from '@angular/core'; import { Observable } from 'rxjs'; import * as BooksPageActions from '../actions/books-page.actions'; import { Book } from '../models/book'; import { BooksFacadeService } from '../services/books-facade.service'; @Component({ // hidden, no changes }) export class BooksPageComponent implements OnInit { books$: Observable<Book[]>; constructor(private booksFacade: BooksFacadeService) { this.books$ = booksFacade.allBooks$; } ngOnInit() { this.booksFacade.dispatch(new BooksPageActions.Load()); } }
And that’s it! We’re no longer using the store in this component. We should still be able to run
ng serve and look at the book list. To double-check that the facade is working, we can set a breakpoint on the
dispatch method in the service. It should trigger when the books load.
Because this is such a simple example, we didn’t get a ton of benefit to using the facade here. However, it’s easy to imagine how much of a lifesaver this would be if this component was using five selectors at a time and there were seven other components doing the same! You now know everything you need to scale this example up. Try adding more selectors or actions to the application!
Remember, you can access the finished sample code here.
Conclusion
The facade pattern can be extremely helpful when trying to build large Angular applications that use NgRx for state management. At the same time, it’s good to be aware of the pitfalls when using this approach. Increased abstraction can cause increased opacity if you’re not careful and can also lead to some bad habits when it comes to creating and using actions. You can avoid those pitfalls by keeping your team well-versed in the NgRx pattern, teaching new developers the reasons for using facades, and by using a
dispatch method in your facade instead of abstracting actions. Good luck and happy coding!: | https://auth0.com/blog/amp/ngrx-facades-pros-and-cons/ | CC-MAIN-2019-18 | refinedweb | 3,106 | 55.44 |
Unity Features
You can replace the default blue avatar with a personalized avatar using the Oculus Platform package. The base Avatar SDK OvrAvatar.cs class is already set up to load the avatar specifications of users, but we need to call Oculus Platform functions to request valid user IDs.
After getting a user ID, we can set the oculusUserID of the avatar accordingly. The timing is important, because this has to happen before the Start() function in OvrAvatar.cs gets called.
The example below shows one way of doing this. It defines a new class called PlatformManager. It extends our existing Getting Started sample. When run, it replaces the default blue avatar with the personalized avatar of the user logged on to Oculus Home.
- Import the Oculus Platform SDK Unity package into your Unity project.
- Specify valid App IDs for both the Oculus Avatars and Oculus Platform plugins:
- Click Oculus Avatars > Edit Configuration and paste your Oculus Rift App Id or Gear VR App Id into the field.
- Click Oculus Platform > Edit Settings and paste your Oculus Rift App Id or Gear VR app Id into the field.
- Create an empty game object named PlatformManager:
- Click GameObject > Create Empty.
- Rename the game object PlatformManager.
- Click Add Component, enter New Script in the search field, and then select New Script.
- Name the new script PlatformManager and set Language to C Sharp.
- Copy and save the following text as Assets\PlatformManager.cs.
using UnityEngine; using Oculus.Avatar; using Oculus.Platform; using Oculus.Platform.Models; using System.Collections; public class PlatformManager : MonoBehaviour { public OvrAvatar myAvatar;) { myAvatar.oculusUserID = message.Data.ID; } } }
- In the Unity Editor, select PlatformManager from the Hierarchy. The My Avatar field appears in the Inspector.
- Drag LocalAvatar from the Hierarchy to the My Avatar field.
Handling Multiple Personalized Avatars
If you have a multi-user scene where each avatar has different personalizations, you probably already have the user IDs of all the users in your scene because you had to retrieve that data to invite them in the first place. Set the oculusUserID for each user 's avatar accordingly.
If your scene contains multiple avatars of the same person, you can iterate through all the avatar objects in the scene to change all their oculusUserID values. For example, the LocalAvatar and RemoteLoopback sample scenes both contain two avatars of the same player.
Here is an example of how to modify the callback of our PlatformManager class to personalize the avatars in the sample scenes:
using UnityEngine; using Oculus.Avatar; using Oculus.Platform; using Oculus.Platform.Models; using System.Collections; public class PlatformManager : MonoBehaviour {) { OvrAvatar[] avatars = FindObjectsOfType(typeof(OvrAvatar)) as OvrAvatar[]; foreach (OvrAvatar avatar in avatars) { avatar.oculusUserID = message.Data.ID; } } } }
Avatar Prefabs
The Avatar Unity package contains two prefabs for Avatars: LocalAvatar and RemoteAvatar. They are located in OvrAvatar >Content > PreFabs. The difference between LocalAvatar and RemoteAvatar is in the driver, the control mechanism behind avatar movements.
The LocalAvatar driver is the OvrAvatarDriver script which derives avatar movement from the logged in user's Touch and HMD or.
The RemoteAvatar driver is the OvrAvatarRemoteDriver script which gets its avatar movement from the packet recording and playback system.
Sample Scenes
There are four sample scenes in the Avatar Unity package:
Controllers
Demonstrates how first-person avatars can be used to enhance the sense of presence for Touch users.
GripPoses
A helper scene for creating custom grip poses. See Custom Touch Grip Poses.
LocalAvatar
Demonstrates the capabilities of both first-person and third-person avatars. Does not yet include microphone voice visualization or loading an Avatar Specification using Oculus Platform.
RemoteLoopback
Demonstrates the avatar packet recording and playback system. See Recording and Playing Back Avatar Pose Updates.
Reducing Draw Calls with the Combine Meshes Option
Each avatar in your scene requires 11 draw calls per eye per frame (22 total). The Combine Meshes option reduces this to 3 draw calls per eye (6 total) by combining all the mesh parts into a single mesh. This is an important performance gain for Gear VR as most apps typically need to stay within a draw call budget of 50 to 100 draw calls per frame. Without this option, just having 4 avatars in your scene would use most or all of that budget.
You should almost always select this option when using avatars. The only drawback to using this option is that you are no longer able to access mesh parts individually, but that is a rare use case.
Custom Touch Grip Poses
The GripPoses sample lets you change the hand poses by rotating the finger joints until you get the pose you want. You can then save these finger joint positions as a Unity prefab that you can load at a later time.
In this example, we will pose the left hand to make it look like a scissors or bunny rabbit gesture.
Creating the left hand pose:
- Open the Samples > GripPoses > GripPoses scene.
- Click Play.
- Press E to select the Rotate transform tool.
In the Hierarchy window, expand LocalAvatar > hand_left > LeftHandPoseEditHelp > hands_l_hand_world > hands:b_l_hand.
Locate all the joints of the fingers you want to adjust. Joint 0 is closest to the palm, subsequent joints are towards the finger tip. To adjust the pinky finger joints for example, expand hands:b_l_pinky0 > hands:b_l_pinky1 > hands:b_l_pinky2 > hands:b_l_pinky3.
In the Hierarchy window, select the joint you want to rotate.
In the Scene window, click a rotation orbit and drag the joint to the desired angle.
- Repeat these two steps until you achieve the desired pose.
Saving the left hand pose:
- In the Hierarchy window, drag hand_l_hand_world to the Project window.
- In the Project window, rename this transform to something descriptive, for example: poseBunnyRabbitLeft.
Using the left hand pose:
- In the Hierarchy window, select LocalAvatar.
- Drag poseBunnyRabbitLeft from the Project window to the Left Hand Custom Pose field in the Inspector Window.
Click Play again. You will see that the left hand is now frozen in our custom bunny grip pose.
Settings for Rift Stand-alone Builds
To make Rift avatars appear in stand-alone executable builds, we need to change two settings:
- Add the Avatar shaders to the Always Included Shaders list in your project settings:
- Click Edit > Project Settings > Graphics.
- Under Always Included Shaders, add +3 to the Size and then press Enter.
- Add the following shader elements: AvatarSurfaceShader, AvatarSurfaceShaderPBS, AvatarSurfaceShaderSelfOccluding.
- Build as a 64-bit application:
- Click File > Build Settings.
- Set Architecture to x86_x64.
Making Rift Hands Interact with the Environment
To allow avatars to interact with objects in their environment, use the OVRGrabber and OVRGrabble components. For a working example, see the AvatarWithGrab sample scene included in the Oculus Unity Sample Framework. | https://developer3.oculus.com/documentation/avatarsdk/latest/concepts/avatars-sdk-unity/ | CC-MAIN-2017-17 | refinedweb | 1,110 | 57.47 |
Writing Point Cloud data to PCD files
In this tutorial we will learn how to write point cloud data to a PCD file.
The code
First, create a file called, let’s say,
pcd_write.cpp in your favorite
editor, and place the following code inside it:
The explanation
Now, let’s break down the code piece by piece.
#include <pcl/io/pcd_io.h> #include <pcl/point_types.h>
The first file is the header that contains the definitions for PCD I/O
operations, and second one contains definitions for several point type
structures, including
pcl::PointXYZ that we will use.
pcl::PointCloud<pcl::PointXYZ> cloud;
describes the templated PointCloud structure that we will create. The type of
each point is set to
pcl::PointXYZ, which is a structure that has
x,
y, and
z fields.
The lines:
// Fill in the cloud data cloud.width = 5; cloud.height = 1; cloud.is_dense = false;); }
fill in the PointCloud structure with random point values, and set the appropriate parameters (width, height, is_dense).
Then:
pcl::io::savePCDFileASCII ("test_pcd.pcd", cloud);
saves the PointCloud data to disk into a file called test_pcd.pcd
Finally:
std::cerr << "Saved " << cloud.points.size () << " data points to test_pcd.pcd." << std::endl; for (std::size_t i = 0; i < cloud.points.size (); ++i) std::cerr << " " << cloud.points[i].x << " " << cloud.points[i].y << " " << cloud.points[i].z << std::endl;
is used to show the data that was generated.
Compiling and running the program
Add the following lines to your CMakeLists.txt file:
After you have made the executable, you can run it. Simply do:
$ ./pcd_write
You will see something similar
You can check the content of the file test_pcd.pcd, using:
$ cat test_pcd.pcd # .PCD v.5 - Point Cloud Data file format FIELDS x y z SIZE 4 4 4 TYPE F F F WIDTH 5 HEIGHT 1 POINTS 5 DATA ascii 0.35222 -0.15188 -0.1064 -0.39741 -0.47311 0.2926 -0.7319 0.6671 0.4413 -0.73477 0.85458 -0.036173 -0.4607 -0.27747 -0.91676 | http://pointclouds.org/documentation/tutorials/writing_pcd.php | CC-MAIN-2020-05 | refinedweb | 339 | 76.01 |
Namespace definitions can be split across multiple files and still have the same name, is that what you mean by "shared"?
// file1.cpp namespace foo { class A {}; class B {}; } // file2.cpp namespace foo { class C {}; class D {}; }
When the project is built, those two namespaces are merged because they have the same name, so concerning membership in the namespace it's as if you wrote it like this:
namespace foo { class A {}; class B {}; class C {}; class D {}; }
No such thing as "shared" namespaces. A namespace is a namespace, it doesn't have "shared" attribute. A namespace is just something that better organizes the data, classes, structures and code. It's main purpose it to avoid name clashes. Without namespaces all names must be unique throughout the program (with a few exceptions). If I declared a variable named
int x; in one *.cpp file and declared a variable with the same name in another *.cpp file the liker would most likely complain that it found two variables with the same name. But if I put each of those variables in different namespaces then the linker would have no problem because each would actually be different variables. | https://www.daniweb.com/programming/software-development/threads/445096/shared-and-separate-name-space | CC-MAIN-2016-07 | refinedweb | 196 | 79.19 |
The uWSGI FastRouter¶
For advanced setups uWSGI includes the “fastrouter” plugin, a proxy/load-balancer/router speaking the uwsgi protocol. It is built in by default. You can put it between your webserver and real uWSGI instances to have more control over the routing of HTTP requests to your application servers.
Getting started¶
First of all you have to run the fastrouter, binding it to a specific address. Multiple addresses are supported as well.
uwsgi --fastrouter 127.0.0.1:3017 --fastrouter /tmp/uwsgi.sock --fastrouter @foobar
Note
This is the most useless Fastrouter setup in the world.
Congratulations! You have just run the most useless Fastrouter setup in the world. Simply binding the fastrouter to a couple of addresses will not instruct it on how to route requests. To give it intelligence you have to tell it how to route requests.
Way 1: –fastrouter-use-base¶
This option will tell the fastrouter to connect to a UNIX socket with the same name of the requested host in a specified directory.
uwsgi --fastrouter 127.0.0.1:3017 --fastrouter-use-base /tmp/sockets/
If you receive a request for example.com the fastrouter will forward the request to /tmp/sockets/example.com.
Way 2: –fastrouter-use-pattern¶
Same as the previous setup but you will be able to use a pattern, with %s mapping to the requested key/hostname.
uwsgi --fastrouter 127.0.0.1:3017 --fastrouter-use-base /tmp/sockets/%s/uwsgi.sock
Requests for example.com will be mapped to /tmp/sockets/example.com/uwsgi.sock.
Way 3: –fastrouter-use-cache¶
You can store the key/value mappings in the uWSGI cache. Choose a way to fill the cache, for instance a Python script like this...
import uwsgi # Requests for example.com on port 8000 will go to 127.0.0.1:4040 uwsgi.cache_set("example.com:8000", "127.0.0.1:4040") # Requests for unbit.it will go to 127.0.0.1:4040 with the modifier1 set to 5 (perl/PSGI) uwsgi.cache_set("unbit.it", "127.0.0.1:4040,5")
Then run your Fastrouter-enabled server, telling it to run the script first.
uwsgi --fastrouter 127.0.0.1:3017 --fastrouter-use-cache --cache 100 --file foobar.py
Way 4: –fastrouter-subscription-server¶
This is probably one of the best way for massive auto-scaling hosting. It uses the subscription server to allow instances to announce themselves and subscribe to the fastrouter.
uwsgi --fastrouter 127.0.0.1:3017 --fastrouter-subscription-server 192.168.0.100:7000
This will spawn a subscription server on address 192.168.0.100 port 7000
Now you can spawn your instances subscribing to the fastrouter:
uwsgi --socket :3031 -M --subscribe-to 192.168.0.100:7000:example.com uwsgi --socket :3032 -M --subscribe-to 192.168.0.100:7000:unbit.it,5 --subscribe-to 192.168.0.100:7000:uwsgi.it
As you probably noted, you can subscribe to multiple fastrouters, with multiple keys. Multiple instances subscribing to the same fastrouter with the same key will automatically get load balanced and monitored. Handy, isn’t it? Like with the caching key/value store, modifier1 can be set with a comma. (,5 above) Another feature of the subscription system is avoiding to choose ports. You can bind instances to random port and the subscription system will send the real value to the subscription server.
uwsgi --socket 192.168.0.100:0 -M --subscribe-to 192.168.0.100:7000:example.com
Way 5: –fastrouter-use-code-string¶
If Darth Vader wears a t-shirt with your face (and in some other corner cases too), you can customize the fastrouter with code-driven mappings. Choose a uWSGI-supported language (like Python or Lua) and define your mapping function.
def get(key): return '127.0.0.1:3031'
uwsgi --fastrouter 127.0.0.1:3017 --fastrouter-use-code-string 0:mapper.py:get
This will instruct the fastrouter to load the script mapper.py using plugin (modifier1) 0 and call the ‘get’ global, passing it the key. In the previous example you will always route requests to 127.0.0.1:3031. Let’s create a more advanced system, for fun!
domains = {} domains['example.com'] = {'nodes': ('127.0.0.1:3031', '192.168.0.100:3032'), 'node': 0} domains['unbit.it'] = {'nodes': ('127.0.0.1:3035,5', '192.168.0.100:3035,5'), 'node': 0} DEFAULT_NODE = '192.168.0.1:1717' def get(key): if key not in domains: return DEFAULT_NODE # get the node to forward requests to nodes = domains[key]['nodes'] current_node = domains[key]['node'] value = nodes[current_node] # round robin :P next_node = current_node + 1 if next_node >= len(nodes): next_node = 0 domains[key]['node'] = next_node return value
uwsgi --fastrouter 127.0.0.1:3017 --fastrouter-use-code-string 0:megamapper.py:get
With only few lines we have implemented round-robin load-balancing with a fallback node. Pow! You could add some form of node monitoring, starting threads in the script, or other insane things. (Be sure to add them to the docs!)
Attention
Remember to not put blocking code in your functions. The fastrouter is totally non-blocking, do not ruin it!
Notes¶
- The fastrouter uses the following vars (in order of precedence) to choose a key to use:
- UWSGI_FASTROUTER_KEY - the most versatile, as it doesn’t depend on the request in any way
- HTTP_HOST
- SERVER_NAME
- You can increase the number of async events the fastrouter can manage (by default it is system-dependent) using –fastrouter-events
You can change the default timeout with –fastrouter-timeout By default the fastrouter will set fd socket passing when used over unix sockets. If you do not want it add –no-fd-passing | http://uwsgi-docs.readthedocs.org/en/latest/Fastrouter.html | CC-MAIN-2014-42 | refinedweb | 950 | 60.41 |
Suppose we have a string s. We have to check whether an anagram of that string is forming a palindrome or not.
So, if the input is like s = "aarcrec", then the output will be True one anagram of this string is "racecar" which is palindrome.
To solve this, we will follow these steps −
Let us see the following implementation to get better understanding −
from collections import defaultdict def solve(s): freq = defaultdict(int) for char in s: freq[char] += 1 odd_count = 0 for f in freq.values(): if f % 2 == 1: odd_count += 1 if odd_count > 1: return False return True s = "aarcrec" print(solve(s))
"aarcrec"
True | https://www.tutorialspoint.com/check-if-any-anagram-of-a-string-is-palindrome-or-not-in-python | CC-MAIN-2021-43 | refinedweb | 108 | 67.69 |
Advertisements
Hi,
While opening a file in Java developers are using following type of code:
File myFile=new File("myfile.txt");
I was trying to find some tutorials about handling file in Java on net. I have also seen many examples where developers were using the following classes:
FileInputStream
DataInputStream
BufferedReader
I am trying to find the good information about these classes but unable to understand the correct use of these classes. I don't know when and how to use these classes.
Can anyone explain me what are the use of these classes?
Is there example code for explaining the File Handling process in Java?
Thanks
Hi,
Java programming language comes with many Interfaces and Classes packaged in the java.io package for easy handling of files. Classes and Interfaces of java.io package can be used to read any type of file.
Java also cares about the performance of the application. It provides BufferedReader class for reading the text file/stream using the buffer. So, The BufferedReader class is used when there is requirement of reading the stream using the buffer. The BufferedReader class increases the performance of the application.
FileInputStream class is used to FileInputStream class is used to read the data from byte steam. It is used to read the data into bytes from the Stream.
The DataInputStream class is used to read primitive Java data types from the input stream.
Read the tutorial Java Read File Line by Line for detailed description and example code.
Hope above explanation will help you understanding the concepts.
Thanks
Hi,
Let me explain you about the File class. The File class is used to get the handle of the file. It does not any method to read the data from file itself.
We have to use the other classes to read the file data. The File class is used to just read the information metadata about the file like size, created date etc.
See the example of Reading data from file.
Thanks
Hi,
Here is the example code of reading file one line at a time and then printing the data on console.
import java.io.*; public class ReadFileExample { public static void main(String[] args) { System.out.println("This is an example of reading file one line at a time"); //Create new file. File file = new File("filetoread.txt"); //Declare the BufferedReader bufferedReader = null; try { bufferedReader = new BufferedReader(new FileReader(file)); String line; //Read one line and then print while ((line = bufferedReader.readLine()) != null) { System.out.println(line); } /* Exception handling */ } catch (IOException e) { e.printStackTrace(); } catch (Exception e) { e.printStackTrace(); } finally { try { //Finally try to close the reader bufferedReader.close(); } catch (Exception ex) { ex.printStackTrace(); } } } }
Thanks
Hi,
Here is very simple example of reading file line by line:
import java.io.*; public class ReadFileExample { public static void main(String[] args) { System.out.println("This is an example of reading file one line at a time"); try { BufferedReader br = new BufferedReader(new FileReader("filetoread.text")); String lineData; while( (lineData = br.readLine()) != null ) { System.out.println(lineData); } br.close(); } catch(Exception e) { System.out.println(e.getMessage()); } } }
You can add the Exception handling logic yourself in your program.
View more tutorials at Learn how to handle files in Java with Examples and Tutorials.
Thanks | http://www.roseindia.net/answers/viewqa/Java-Beginners/30876-File-Handling-in-Java.html | CC-MAIN-2014-10 | refinedweb | 543 | 59.09 |
- Creating users
- User profile
- Profile settings
- Changing your password
-.
Unknown sign-in
GitLab notifies you if a sign-in occurs that is from an unknown IP address or device. See Unknown Sign-In Notification for more details.
User password
- Navigate to your profile’s Settings > Password.
- Enter your current password in the ‘Current password’ field.
- Enter your desired new password twice, once in the ‘New password’ field and once in the ‘Password confirmation’ field.
- Click the ‘Save password’ button.
If you don’t know your current password, select the ‘I forgot my password’ link.
Changing your username
Your
username is a unique
namespace
related to your user ID. Changing it can have unintended side effects, read
how redirects behave
before proceeding.
To change your
username:
- Navigate to your profile’s Settings > Account.
- Enter a new username under Change username.
- Click Update username.
Private.
- Alternatively, select the Busy checkbox (Introduced in GitLab 13.6}.
- Click Add status emoji (smiley face), and select the desired emoji.
- Click Update profile settings.
You can also set your current status using the API.
If you previously selected the “Busy” checkbox, remember to deselect it when you become available again..
Increased. | https://docs.gitlab.com/ee/user/profile/ | CC-MAIN-2020-50 | refinedweb | 195 | 60.92 |
Challenging CSS Best Practices
- By Thierry Koblentz
- October 21st, 2013
Editor’s Note: This article features techniques that are used in practice by Yahoo! and questions coding techniques that we are used to today. You might be interested in reading Decoupling HTML From CSS by Jonathan Snook, On HTML Elements Identifiers by Tim Huegdon and Atomic Design With Sass by Robin Rendle as well. Please keep in mind: some of the mentioned techniques are not considered to be best practices.
When it comes to CSS, I believe that the sacred principle of “separation of concerns” (SoC) deserves to be challenged. In the context of Web design, this principle relates to the separation of the three layers:
- structure,
- presentation,
- behavior.

Styling documents entirely through style sheets, keeping all presentation out of the markup, is the approach popularized by Dave Shea’s excellent project CSS Zen Garden. CSS Zen Garden is what most — if not all — developers consider to be the standard for how to author style sheets.
The Standard
To help me illustrate issues related to today’s best practices, I’ll use a very common pattern: the media object. Its combination of markup and CSS will be our starting point.
Markup
In our markup, a wrapper (div.media) contains an image wrapped in a link (a.img), followed by a div (div.bd):
<div class="media">
    <a href="" class="img">
        <img src="thierry.jpg" alt="me" width="40" />
    </a>
    <div class="bd">
        @thierrykoblentz 14 minutes ago
    </div>
</div>
CSS
Let’s give a 10-pixel margin to the wrapper and style both the wrapper and div.bd as block-formatting contexts (BFC). In other words, the wrapper will contain the floated link, and the content of div.bd will not wrap around said link. A gutter between the image and text is created with a 10-pixel margin (on the float):
.media {
    margin: 10px;
}
.media,
.bd {
    overflow: hidden;
    _overflow: visible;
    zoom: 1;
}
.media .img {
    float: left;
    margin-right: 10px;
}
.media .img img {
    display: block;
}
Result
Here is the presentation of the wrapper, with the image in the link and the blob of text:
A New Requirement Comes In
Suppose we now need to be able to display the image on the other side of the text as well.
Markup
Thanks to the magic of BFC, all we need to do is change the styles of the link. For this, we use a new class, imgExt.
<div class="media">
    <a href="" class="imgExt">
        <img src="thierry.jpg" alt="me" width="40" />
    </a>
    <div class="bd">
        @thierrykoblentz 14 minutes ago
    </div>
</div>
CSS
We’ll add an extra rule to float the link to the right and change its margin:
.media {
    margin: 10px;
}
.media,
.bd {
    overflow: hidden;
    _overflow: visible;
    zoom: 1;
}
.media .img {
    float: left;
    margin-right: 10px;
}
.media .img img {
    display: block;
}
.media .imgExt {
    float: right;
    margin-left: 10px;
}
Result
The image is now displayed on the opposite side:
One More Requirement Comes In
Suppose we now need to make the text smaller when this module is inside the right rail of the page. To do that, we create a new rule, using #rightRail as a contextual selector:
Markup
Our module is now inside a div#rightRail container:
<div id="rightRail">
    <div class="media">
        <a href="" class="img">
            <img src="thierry.jpg" alt="me" width="40" />
        </a>
        <div class="bd">
            @thierrykoblentz 14 minutes ago
        </div>
    </div>
</div>
CSS
Again, we create an extra rule, this time using a descendant selector, #rightRail .bd.
.media {
    margin: 10px;
}
.media,
.bd {
    overflow: hidden;
    _overflow: visible;
    zoom: 1;
}
.media .img {
    float: left;
    margin-right: 10px;
}
.media .img img {
    display: block;
}
.media .imgExt {
    float: right;
    margin-left: 10px;
}
#rightRail .bd {
    font-size: smaller;
}
Result
Here is our original module, showing inside div#rightRail:
What’s Wrong With This Model?
- Simple changes to the style of our module have resulted in new rules in the style sheet.
There must be a way to style things without always having to write more CSS rules.
- We are grouping selectors for common styles (.media, .bd {}).
Grouping selectors, rather than using a class associated with these styles, will lead to more CSS.
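For example, every module that needs these styles must be appended to the group, whereas a single dedicated class would never have to change. A sketch of how the grouped rule keeps growing (the .someNewModule selector is hypothetical):

.media, .bd, .someNewModule {
    overflow: hidden;
    _overflow: visible;
    zoom: 1;
}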
- Of our six rules, four are context-based.
Rules that are context-specific are hard to maintain. Styles related to such rules are not very reusable.
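For example, each new context forces us to repeat the same style in yet another rule (the #footer selector is hypothetical):

#rightRail .bd {
    font-size: smaller;
}
#footer .bd {
    font-size: smaller;
}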
- RTL and LTR interfaces become complicated.
To change direction, we’d need to overwrite some of our styles (i.e. write more rules). For example:
.rtl .media .img {
    margin-right: auto; /* reset */
    float: right;
    margin-left: 10px;
}
.rtl .media .imgExt {
    margin-left: auto; /* reset */
    float: left;
    margin-right: 10px;
}
Meet Atomic Cascading Style Sheet
As we all know, the smaller the unit, the more reusable it is.
“Treat code like Lego. Break code into the smallest little blocks possible.” — @csswizardry (via @stubbornella) #btconf
— Smashing Magazine (@smashingmag) May 27, 2013
To break down styles into irreducible units, we can map classes to a single style, rather than many. This will result in a more granular palette of rules, which in turn improves reusability.
Let’s revisit the media object using this new approach.
Markup
We are using five classes, none of which are related to content:
<div class="Bfc M-10">
    <a href="" class="Fl-start Mend-10">
        <img src="thierry.jpg" alt="me" width="40" />
    </a>
    <div class="Bfc Fz-s">
        @thierrykoblentz 14 minutes ago
    </div>
</div>
CSS
Each class is associated with one particular style. For the most part, this means we have one declaration per rule.
.Bfc { overflow: hidden; zoom: 1; } .M-10 { margin: 10px; } .Fl-start { float: left; } .Mend-10 { margin-right: 10px; } .Fz-s { font-size: smaller; }
Result
What Is This about?
Let’s ignore the class names for now and focus on what this does (or does not):
- No contextual styling
We do not use contextual or descendant selectors, which means that our style sheet has no dead weight.
- Directions (left and right) are “abstracted.”
Rather than overwriting styles, we serve a RTL style sheet that contains rules such as these:
.Fl-start { float: right; } .Mend-10 { margin-left: 10px; }
Same classes, same properties, different values.
But the most important thing to notice here is that we are styling via markup. We have changed the context in which we style our modules. We are now editing HTML templates instead of style sheets.
I believe that this approach is a game-changer because it narrows the scope dramatically. We are styling not in the global scope (the style sheet), but at the module and block level. We can change the style of a module without worrying about breaking something else on the page. And we can do this without adding any rule to the style sheet, let alone creating a new class and rule:
.someBasicStyleForThisElementHere {...}
We get no redundancy. Selectors are not duplicated, and styles belong to a single rule instead of being part of many. For example, the style sheets that this page links to contain 72
float declarations.
Also, abandoning a style — for example, deciding to always keep the image on the left side of the module — does not make any of our rules obsolete.
Sound Good?
Not sold yet? I hear you saying, “This goes against every single rule in the book. This is no better than inline styling. And your class names are not only cryptic, but unsemantic, too!”
Fair enough. Let’s address these concerns.
Regarding Unsemantic Class Names
If you check the W3C’s “Tips for Webmasters18,” where it says “Good names don’t change,” you’ll see that the argument is about maintenance, not semantics per se. All it says is that changing styles is easier in a CSS file than in multiple HTML files.
.border4px would be a bad name only if changing the style of an element required us to change the declaration that that class name is associated with. In other words:
.border4px {border-width:2px;}
Regarding Cryptic Class Names
For the most part, these class names follow the syntax of Zen Coding — see the “Zen Coding Cheat Sheet19” (PDF) — now renamed Emmet20. In other words, they are simple abbreviations.
There are exceptions for styles associated with direction (left and right) and styles that involve a combination of declarations. For example,
Bfc stands for “block-formatting context.”
Regarding Mimicking Inline Styles
Hopefully, the diagram below clears things up:
- Specificity
The technique is not as specific as
@style. It lowers style weight because rules rely on a single class, as opposed to rules like
.parent .bd {}, which clocks in at 0.0.2.0 (see “CSS Specificity: Things You Should Know22”).
- Verbosity
Most classes are abbreviations of declarations (for example,
M-10versus
margin: 10px). Some classes, such as
Bfc, refer to more than one style (see “Mapping” in the diagram above). Other classes use “start” and “end” keywords, rather than left and right values (see “Abstraction” in the diagram above).
Here are the advantages of
@style:
- Scope
Styles are “sandboxed” to the nodes they are attached to.
- Portability
Because the styles are “encapsulated,” you can move modules around without losing their styles. Of course, we still need the style sheet; however, because we are making context irrelevant, modules can live anywhere on a page, website or even network.
The Path To Bloat
Because the styles of our module are tied only to presentational class names, they can be anything we want them to be. For example, if we need to create a simple two-column layout, all we need to do is replace the link with a
div in our template. That would look like this:
<div class="Bfc M-10"> <div class="Fl-start Mend-10 W-25"> column 1 </div> <div class="Bfc"> column 2 </div> </div>
And we would need only one extra rule in the style sheet:
.Bfc { overflow: hidden; zoom: 1; } .M-10 { margin: 10px; } .Fl-start { float: left; } .Mend-10 { margin-right: 10px; } .Fz-s { font-size: smaller; } .W-50 { width: 50%; }
Compare this to the traditional way:
<div class="wrapper"> <div class="sidebar"> column 1 </div> <div class="content"> sidebar </div> </div>
This would require us to create three new classes, to add an extra rule and to group selectors.
.wrapper, .content, .media, .bd { overflow: hidden; _overflow: visible; zoom: 1; } .sidebar { width: 50%; } .sidebar, .media .img { float: left; margin-right: 10px; } .media .img img { display: block; }
I think the code above pretty well demonstrates the price we pay for following the SoC principle. In my experience, all it does is grow style sheets.
Moreover, the larger the files, the more complex the rules and selectors become. And then no one would dare edit the existing rules:
- We leave alone rules that we suspect to be obsolete for fear of breaking something.
- We create new rules, rather than modify existing ones, because we are not sure the latter is 100% safe.
In other words, we make things worse because we can get away with bloat.
Nowadays, people are accustomed to very large style sheets, and many authors think they come with the territory. Rather than fighting bloat, they use tools (i.e. preprocessors) to help them deal with it. Chris Eppstein tells us23:
“LinkedIn has over 1,100 Sass files (230k lines of SCSS) and over 90 web developers writing Sass every day.”
CSS Bloat vs. HTML Bloat
Let’s face it: the data has to live somewhere. Consider these two blocks:
<div class="sidebar">
<div class="Fl-start Mend-10 W-25">
In many cases, the “semantic” class name makes up more bytes than the presentational class name (
.wrapper versus
.Bfc). But I do not think this is a real concern compared to what most apps onboard these days via
data- attributes.
This is where gzip24 comes into play, because the high redundancy in class names across a document would achieve better compression. And the same is true of style sheets, in which we have many redundant sequences:
.M-1 {margin: 1px;} .M-2 {margin: 2px;} .M-4 {margin: 4px;} .M-6 {margin: 6px;} .M-8 {margin: 8px;} etc.
Caching
Presentational rules do not change. Style sheets made from such rules mature into tool sets in which authors can find everything they need. By their nature, they stop growing and become immutable, and immutable is cache-friendly.
No More .button Class?
The technique I’m discussing here is not about banning “semantic” class names or rules that group many declarations. The idea is to reevaluate the benefits of the common approach, rather than adopting it as the de facto technique for styling Web pages. In other words, we are restricting the “component” approach to the few cases in which it makes the most sense.
For example, you may find the following rules in our style sheets, rules that set styles for which we do not create simple classes or rules that ensure cross-browser support.
.button { display: inline-block; *display: inline; zoom: 1; font-size: bold 16px/2em Arial; height: 2em; box-shadow: inset 1px 1px 2px 0px #fff; background: -webkit-gradient(linear, left top, left bottom, color-stop(0.05, #ededed), color-stop(1, #dfdfdf)); background: linear-gradient(center top, #ededed 5%, #dfdfdf 100%); filter: progid:DXImageTransform.Microsoft.gradient(startColorstr='#ededed', endColorstr='#dfdfdf'); background-color: #ededed; color: #777; text-decoration: none; text-align: center; text-shadow: 1px 1px 2px #ffffff; border-radius: 4px; border: 2px solid #dcdcdc; } .modal { position: fixed; top: 50%; left: 50%; -webkit-transform: translate(-50%,-50%); -ms-transform: translate(-50%,-50%); transform: translate(-50%,-50%); *width: 600px; *margin-left: -300px; *top: 50px; } @media \0screen { .modal { width: 600px; margin-left: -300px; top: 50px; } }
On the other hand, you would not see rules like the ones below (i.e. styles bound to particular modules), because we prefer to apply these same styles using multiple classes: one for font size, one for color, one for floats, etc.
.news-module { font-size: 14px; color: #555; float: left; width: 50%; padding: 10px; margin-right: 10px; } .testimonial { font-size: 16px; font-style: italic; color: #222; padding: 10px; }
Do We Include Every Possible Style In Our Style Sheet?
The idea is to have a pool of rules that authors can choose from to style anything they want. Styles that are common enough across a website would become part of the style sheet. If a style is too specific, then we’d rely on
@style (the style attribute). In other words, we’d prefer to pollute the markup rather than the style sheet. The primary goal is to create a sheet made of rules that address various design patterns, from a basic rule that floats an element to “helper” classes.
/** * one liner with ellipsis * 1. we inherit hyphens:auto from body, which would break "Ell" in table cells */ .Ell { max-width: 100%; white-space: nowrap; overflow: hidden; text-overflow: ellipsis; -webkit-hyphens: none; /* 1 */ -ms-hyphens: none; -o-hyphens: none; hyphens: none; } /** * kinda line-clamp * two lines according to default font-size and line-height */ .LineClamp { display: -webkit-box; -webkit-line-clamp: 2; -webkit-box-orient: vertical; font-size: 13px; line-height: 1.25; max-height: 32px; _height: 32px; overflow: hidden; } /** * reveals an hidden element on :hover or :focus * visibility can be forced by applying the class "RevealNested-on" * IE8+ */ :root .NestedHidden { opacity: 0; } :root .NestedHidden:focus, :root .RevealNested:hover .NestedHidden, :root .RevealNested-on .NestedHidden { opacity: 1; }
How Does This Scale?
We have just released a brand new My Yahoo25, which relies heavily on this technique. This is how it compares to a few other Yahoo products (after gzip’ing):
Our style sheet weighs 17.9 KB (about 3 KB of which are property-specific), and it is shareable (unlike the style sheets of other properties). The reason for this is that none of the rules it contains relate to content.
Wrapping Up
Because presentational class names have always been deemed “out of bounds,” we — the community — have not really investigated what their use entails. In fact, in the name of best practice, we’ve dismissed every opportunity to explore their potential benefits.
Here at Yahoo, @renatoiwa26, @StevenRCarlson27 and I28 are developing projects with this new CSS architecture29. The code appears to be predictable, reusable, maintainable and scalable. These are the results we’ve experienced so far:
- Less bloat
We can build entire modules without adding a single line to the style sheets.
- Faster development
Styles are driven by classes that are not related to content, so we can copy and paste existing modules to get started.
- RTL interface for free
Using start and end keywords makes a lot of sense. It saves us from having to write extra rules for RTL context.
- Better caching
A huge chunk of CSS can be shared across products and properties.
- Very little maintenance (on the CSS side)
Only a small set of rules are meant to change over time.
- Less abstraction
There is no need to look for rules in a style sheet to figure out the styling of a template. It’s all in the markup.
- Third-party development
A third party can hand us a template without having to attach a style sheet (or a
styleblock) to it. No custom rules from third parties means no risk of breakage due to rules that have not been properly namespaced.
(Note that if maintenance is easier on the CSS side than on the HTML side, then the reason is simply that we can cheat on the CSS side by not cleaning up rules. But if we were required to keep things lean and clean, then the pain would be the same.)
Final Note
I was at a meetup30 a couple of weeks ago, where I heard Colt McAnlis say, “Tools, not rules31.” A quick search for this idiom returned this32:
“We all need to be open to new learnings, new approaches, new best practices and we need to be able to share them.”
(al, ea)
Footnotes
- 1
- 2
- 3
- 4
- 5
- 6
- 7
- 8
- 9
- 10
- 11
- 12
- 13
- 14
- 15
- 16
- 17
- 18
- 19
- 20
- 21
- 22
- 23
- 24
- 25
- 26
- 27
- 28
- 29
- 30
- 31
- 32
↑ Back to top Tweet itShare on Facebook | http://www.smashingmagazine.com/2013/10/21/challenging-css-best-practices-atomic-approach/ | CC-MAIN-2015-27 | refinedweb | 2,976 | 73.27 |
Analyzing NEOs¶.
[1]:
from astropy import time from poliastro.twobody.orbit import Orbit from poliastro.bodies import Earth from poliastro.plotting import StaticOrbitPlotter
Small Body Database (SBDB)¶
[2]:
eros = Orbit.from_sbdb("Eros") eros.plot(label="Eros");
You can also search by IAU number or SPK-ID (there is a faster
neows.orbit_from_spk_id() function in that case, although):
[3]:
ganymed = Orbit.from_sbdb("1036") # Ganymed IAU number amor = Orbit.from_sbdb("2001221") # Amor SPK-ID eros = Orbit.from_sbdb("2000433") # Eros SPK-ID frame = StaticOrbitPlotter() frame.plot(ganymed, label="Ganymed") frame.plot(amor, label="Amor") frame.plot(eros, label="Eros");
You can use the wildcards from that browser:
* and
?.
Keep it in mind that
from_sbdb() can only return one Orbit, so if several objects are found with that name, it will raise an error with the different bodies.
[4]:
Orbit.from_sbdb("*alley")
--------------------------------------------------------------------------- ValueError Traceback (most recent call last) <ipython-input-4-0d6a09af6900> in <module> ----> 1 Orbit.from_sbdb("*alley") ~/Development/poliastro/poliastro-library/src/poliastro/twobody/orbit.py in from_sbdb(cls, name, **kargs) 497 498 raise ValueError( --> 499 str(obj["count"]) + " different objects found: \n" + objects_name_in_str 500 ) 501 ValueError: 6 different objects found: 903 Nealley (A918 RH) 2688 Halley (1982 HG1) 14182 Alley (1998 WG12) 21651 Mission Valley (1999 OF1) 36445 Smalley (2000 QU) 1P/Halley
Note that epoch is provided by the service itself, so if you need orbit on another epoch, you have to propagate it:
[5]:
eros.epoch.iso
[5]:
'2019-04-27 00:01'
[6]:
epoch = time.Time(2458000.0, scale="tdb", format="jd") eros_november = eros.propagate(epoch) eros_november.epoch.iso
[6]:
'2017-09-03 12:00'():
[7]:
from poliastro.neos import dastcom5
[8]:
atira = dastcom5.orbit_from_name("atira")[0] # NEO wikipedia = dastcom5.orbit_from_name("wikipedia")[0] # Asteroid, but not NEO. frame = StaticOrbit:
[9]:
halleys = dastcom5.orbit_from_name("1P") frame = StaticOrbit:
[10]:
ast_db = dastcom5.asteroid_db() comet_db = dastcom5.comet_db() ast_db.dtype.names[ :20 ] # They are more than 100, but that would be too much lines in this notebook :P
[10]:
('NO', 'NOBS', 'OBSFRST', 'OBSLAST', 'EPOCH', 'CALEPO', 'MA', 'W', 'OM', 'IN', 'EC', 'A', 'QR', 'TP', 'TPCAL', 'TPFRAC', 'SOLDAT', 'SRC1', 'SRC2', 'SRC3')
Asteroid and comet parameters are not exactly the same (although they are very close).
[11]:
aphelion_condition = 2 * ast_db["A"] - ast_db["QR"] < 0.983 axis_condition = ast_db["A"] < 1.3 atiras = ast_db[aphelion_condition & axis_condition]
The number of
Atira NEOs we use using this method is:
[12]:
len(atiras)
[12]:
16
Which is consistent with the stats published by CNEOS
Now we’re gonna plot all of their orbits, with corresponding labels, just because we love plots :), using the
ASTNAM property of DASTCOM5 database:
[14]:
from poliastro.bodies import Earth earth = Orbit.from_body_ephem(Earth) frame = StaticOrbitPlotter() frame.plot(earth, label=Earth) for record in atiras["NO"]: ss = dastcom5.orbit_from_record(record).to_icrs() frame.plot(ss, color="#666666")
If we needed also the names of each asteroid, we could do:
[15]:
frame = StaticOrbitPlotter() frame.plot(earth, label=Earth) for i in range(len(atiras)): record = atiras["NO"][i] label = atiras["ASTNAM"][i].decode().strip() # DASTCOM5 strings are binary ss = dastcom5.orbit_from_record(record).to_icrs() frame.plot(ss, label=label)
We knew beforehand that there are no
Atira comets, only asteroids (comet orbits are usually more eccentric), but we could use the same method with
com_db if:
[16]:
db = dastcom5.entire_db() db.columns
[16]:):
[17]:
db[ db.NAME == "Halley" ] # As you can see, Halley is the name of an asteroid too, did you know that?
[17]:
Panda offers many functionalities, and can also be used in the same way as the
ast_db and
comet_db functions:
[18]:
aphelion_condition = (2 * db["A"] - db["QR"]) < 0.983 axis_condition = db["A"] < 1.3 atiras = db[aphelion_condition & axis_condition]
[19]:
len(atiras)
[19]:
349
What? I said they can be used in the same way!
Dont worry :) If you want to know what’s happening here, the only difference is that we are now working with comets too, and some comets have a negative semi-major axis!
[20]:
len(atiras[atiras.A < 0])
[20]:
333
So, rewriting our condition:
[21]:
axis_condition = (db["A"] < 1.3) & (db["A"] > 0) atiras = db[aphelion_condition & axis_condition] len(atiras)
[21]:
16 | https://docs.poliastro.space/en/v0.13.0/examples/Using%20NEOS%20package.html | CC-MAIN-2019-39 | refinedweb | 678 | 50.53 |
SD_BUS_GET_FD(3) sd_bus_get_fd SD_BUS_GET_FD(3)
sd_bus_get_fd - Get the file descriptor connected to the message bus
#include <systemd/sd-bus.h> int sd_bus_get_fd(sd_bus *bus);
sd_bus_get_fd() returns the file descriptor used to communicate with the message bus. This descriptor can be used with select(3), poll(3), or similar functions to wait for incoming messages. If the bus was created with the sd_bus_set_fd(3) function, then the input_fd used in that call is returned.
Returns the file descriptor used for incoming messages from the message bus.
systemd(1), sd-bus(3), sd_bus_set_f_GET_FD(3)
Pages that refer to this page: sd_bus_process(3), systemd.index(7) | http://man7.org/linux/man-pages/man3/sd_bus_get_fd.3.html | CC-MAIN-2017-47 | refinedweb | 103 | 65.22 |
Converting Qt 5 Projects into Qt 6 Projects
Qt Design Studio supports creating UIs with Qt 6 in addition to Qt 5. However, to make a project that uses Qt 5 use Qt 6, you have to be aware of a few differences and issues that are discussed in this topic.
Font Loader
Projects that were created with Qt Design Studio 2.1 use
FontLoader in a way that is not supported in Qt 6. Specifically, the
name property is read-only in Qt 6. Therefore, you must modify the
Constants.qml file to have fonts loaded correctly. You can either remove the
FontLoader or switch to using the
source property instead of the
name property.
To remove the
FontLoader, delete the following line from the
Constants.qml file:
readonly property FontLoader mySystemFont: FontLoader { name: "Arial" }
Then, remove the following lines that contain references to mySystemFont:
readonly property font font: Qt.font({ family: mySystemFont.name, pixelSize: Qt.application.font.pixelSize }) readonly property font largeFont: Qt.font({ family: mySystemFont.name, pixelSize: Qt.application.font.pixelSize * 1.6 })
Alternatively, you can keep the
FontLoader and use the
source property instead of the
name property. If you are unsure about how to do this, you can replace the
Constants.qmlâ¯file with a new one that you create by using Qt Design Studio 2.2.
Qt Quick Studio Components
Qt Quick Studio Components are available in Qt 6, except for the Iso Icon component. It specifies an icon from an ISO 7000 icon library as a Picture component, which is not supported in Qt 6. Therefore, Iso Icon is also not supported in Qt 6.
Qt Quick Studio Effects
2D Effects are only partially supported. The following 2D effects are not available in Qt 6:
- Blend
- Inner Shadow
- Blur effects except:
Substitutes are provided for the obsolete effects to keep Qt 5 based applications working, but the effects will not be rendered as expected.
Qt Quick 3D
In Qt 6, you cannot use the import
import QtQuick3D 1.15, which imports a Qt 5 based Qt Quick 3D module. Qt 6 does not require a version for imports, and therefore it is not used by default. To turn a Qt 5 based project into a Qt 6 based project, you have to adjust the imports in all
.qml files that use Qt Quick 3D by removing the version numbers.
For more information about changes in Qt Quick 3D, see the changes file.
QML
For general information about changes in QML between Qt 5 and Qt 6, see:
The most notable change is that Qt 6 does not require a version for imports anymore.
Qt Design Studio
Projects that support only Qt 6 are marked with
qt6Project: true in the
.qmlproject file. This line is added if you choose Qt 6 in the wizard when creating the project. If the project file does not contain this line, the project will use Qt 5 and a Qt 5 kit by default. You can change this in the project Run Settings, where you can select Qt 6 instead.
Projects that use Qt 6 specific features will not work with Qt 5.â¯This means that projects that are supposed to work with both Qt 5 and Qt 6 require versions for their imports.
Therefore, if you want to use Qt Quick 3D, using the same project with Qt 5 and Qt 6 is not possible.
Available under certain Qt licenses.
Find out more. | https://doc-snapshots.qt.io/qtdesignstudio-master/studio-porting-projects.html | CC-MAIN-2022-40 | refinedweb | 577 | 65.42 |
06 July 2010 17:19 [Source: ICIS news]
By Nigel Davis
LONDON (ICIS news)--In a curious reversal of fortunes, the Gulf Cooperation Council (GCC) countries of Bahrain, Kuwait, Oman, Saudi Arabia and the United Arab Emirates (UAE) are facing gas shortages that analysts believe are likely to worsen.
Gone are the days of ready, low-cost availability. Gas supply has become tight as countries have struggled to find more of valuable natural resources. Demand has risen sharply. The global downturn has exacerbated the situation but also presented some opportunities.
The impact on petrochemicals has been real as the supply of associated gas from reduced oil production has restrained ethane availability to cracker operators.
For some years now, chemical industry watchers have been quick to point out that the regional ethane advantage has dwindled, particularly for new participants.
New facilities will be liquids-based, perhaps with some feedstock flexibility. The cost advantage will be very different from that enjoyed by the current crop of crackers.
The GCC gas shortage, however, will have a much more profound effect on the Gulf states. This is an energy issue writ large in a world of changed gas dynamics.
The recession has eased the pain. Reduced global demand for gas, the growth of supply from shale deposits in ?xml:namespace>
“That supply overhang provides the GCC with a prized short-term opportunity to ease what has become a major energy issue: a shortage of gas,” says management consulting firm Booz & Company.
The GCC gas shortage will worsen through 2015 as supply struggles to keep pace with demand, says Booz. Under a low-growth, continued-recession scenario, the gas shortage might increase from 19bn cubic metres in 2009 to some 31bn in 2015, the company says.
Under a scenario in which growth returns, the shortage that year could be as high as 50bn cubic metres.
“Bahrain, Kuwait, Oman, Saudi Arabia and the United Arab Emirates are facing a reversal of a decades old status quo” and find themselves in uncharted territory, Booz suggests.
Over the long-term, they can address the supply/demand imbalance by raising prices, developing energy efficiencies and energy alternatives, looking at different methods for advanced oil recovery and providing incentives for the international oil companies to participate in the upstream gas sector, it adds.
But those oil companies have not had much success in either
Renegotiating some of these deals, in the light of slower economic growth, could be a life-saver for some of the countries concerned.
“Through analysis, planning, and implementation, GCC countries and energy producers can take measured steps to ensure that they are able to keep the lights on in the most economically viable way for decades to come,” Booz points out.
But what about keeping the chemicals flowing?
Turning to liquids for chemicals production is a natural step for most GCC countries. But even in
The knock-on effect on chemicals is clear. Just this month press reports have suggested that ExxonMobil’s plans for a big cracker and polyethylene (PE) and monoethylene glycol (MEG) plants at Ras Laffan
If the ExxonMobil project founders, that might open up opportunities for Shell and Total, both of which have their own plans for local development.
So speculation, as might be expected, is rife. Finding the right feedstock for chemicals in the Gulf region currently is a tough call. That’s a far cry from the situation not so long ago.
Read John Richardson and Malini Hariharan’s Asian Chemical Connections. | http://www.icis.com/Articles/2010/07/06/9374290/insight+gulf+states+push+to+the+limit+on+natural+gas.html | CC-MAIN-2013-20 | refinedweb | 582 | 51.58 |
I want to run a method in the background of my tkinter frame that will constantly check if certain files exist in a specific folder. As long as the files dont exist, there will be a red
tk.label
tk.label
tk.label
while
Define a function that does whatever you want, and have that function schedule itself to be run again in the future. It will run until the program quits.
This example assumes a global variable named
root that refers to the root window, but any widget reference will work.
def do_something(): <your code here> root.after(3000, do_something)
Call it once to start it, and then it will run forever
do_some_check() | https://codedump.io/share/RzpKFOdUE07r/1/what-is-the-simplest-way-to-run-a-constant-loop-in-a-tkinter-frame | CC-MAIN-2018-17 | refinedweb | 114 | 72.56 |
clone is a tricky method from java.lang.Object class, which is used to create copy of an Object in Java. Intention of clone() method is simple, to provide a cloning mechanism, but some how it's implementation became tricky and has been widely criticized difference between deep copy and shallow copy in Java. In this three part article, we will first see working of clone method in Java, and in second part we will learn how to override clone method in Java, and finally we will discuss deep copy vs shallow copy mechanism. The reason I chose to make this a three part article, is to keep focus on one thing at a time. Since clone() itself is confusing enough, it's best to understand concept one by one. In this post, we will learn what is clone method, what it does and How clone method works in Java. By the way, clone() is one of the few fundamental methods defined by objects, others being equals, hashcode(), toString() along with wait and notify methods.
What is clone of object in Java
An object which is returned by clone() method is known as clone of original instance. A clone object should follow basic characteristics e.g. a.clone() != a, which means original and clone are two separate object in Java heap, a.clone().getClass() == a.getClass() and clone.equals(a), which means clone is exact copy of original object. These characteristic is followed by a well behaved, correctly overridden clone() method in Java, but it's not enforced by cloning mechanism. Which means, an object returned by clone() method may violate any of these rules. By following convention of returning object by calling super.clone(), when overriding clone() method, you can ensure that it follows first two characteristics. In order to follow third characteristic, you must override equals method to enforce logical comparison, instead of physical comparison exists in java.lang.Object. For example, clone() method of Rectangle class in this method return object, which has these characteristics, but if you run same program by commenting equals(), you will see that third invariant i.e. clone.equals(a) will return false. By the way there are couple of good items on Effective Java regarding effective use of clone method, I highly recommend to read those items after going through this article.
How Clone method works in Java
1) Any class calls clone() method on instance, which implements Cloneable and overrides protected clone() method from Object class, to create a copy.
Rectangle rec = new Rectangle(30, 60);
logger.info(rec);
try {
logger.info("Creating Copy of this object using Clone method");
Rectangle copy = rec.clone();
logger.info("Copy " + copy);
} catch (CloneNotSupportedException ex) {
logger.debug("Cloning is not supported for this object");
}
2) Call to clone() method on Rectangle is delegated to super.clone(), which can be a custom super class or by default java.lang.Object
@Override
protected Rectangle clone() throws CloneNotSupportedException {
return (Rectangle) super.clone();
}
3) Eventually call reaches to java.lang.Object's clone() method, which verify if corresponding instance implements Cloneable interface, if not then it throws CloneNotSupportedException, otherwise it creates a field-by-field copy of instance of that class and returned to caller.
So in order for clone() method to work properly, two things need to happen, a Class should implement Cloneable interface and should override clone() method of Object class. By the way this was this was the simplest example of overriding clone method and how it works, things gets more complicated with real object, which contains mutable fields, arrays, collections, Immutable object and primitives, which we will see in second part of this Java Cloning tutorial series.
Java clone() method Example
In this article, we have not seen complexity of overriding clone method in Java, as our Rectangle class is very simple and only contains primitive fields, which means shallow cloning provided by Object's clone() method is enough. But, this example is important to understand process of Object cloning in Java, and How clone method works. Here is complete code of this clone() method overriding example :
import org.apache.log4j.Logger;
/**
* Simple example of overriding clone() method in Java to understand How Cloning of
* Object works in Java.
*
* @author
*/
public class JavaCloneTest {
private static final Logger logger = Logger.getLogger(JavaCloneTest.class);
public static void main(String args[]) {
Rectangle rec = new Rectangle(30, 60);
logger.info(rec);
Rectangle copy = null;
try {
logger.info("Creating Copy of this object using Clone method");
copy = rec.clone();
logger.info("Copy " + copy);
} catch (CloneNotSupportedException ex) {
logger.debug("Cloning is not supported for this object");
}
//testing properties of object returned by clone method in Java
logger.info("copy != rec : " + (copy != rec));
logger.info("copy.getClass() == rec.getClass() : " + (copy.getClass() == rec.getClass()));
logger.info("copy.equals(rec) : " + copy.equals(rec));
//Updating fields in original object
rec.setHeight(100);
rec.setWidth(45);
logger.info("Original object :" + rec);
logger.info("Clonned object :" + copy);
}
}
public class Rectangle implements Cloneable{
private int width;
private int height;
public Rectangle(int w, int h){
width = w;
height = h;
}
public void setHeight(int height) {
this.height = height;
}
public void setWidth(int width) {
this.width = width;
}
public int area(){
return widthheight;
}
@Override
public String toString(){
return String.format("Rectangle [width: %d, height: %d, area: %d]", width, height, area());
}
@Override
protected Rectangle clone() throws CloneNotSupportedException {
return (Rectangle) super.clone();
}
@Override
public boolean equals(Object obj) {
if (obj == null) {
return false;
}
if (getClass() != obj.getClass()) {
return false;
}
final Rectangle other = (Rectangle) obj;
if (this.width != other.width) {
return false;
}
if (this.height != other.height) {
return false;
}
return true;
}
@Override
public int hashCode() {
int hash = 7;
hash = 47 hash + this.width;
hash = 47 hash + this.height;
return hash;
}
}
Output:
2013-05-20 23:46:58,882 0 [main] INFO JavaCloneTest - Rectangle [width: 30, height: 60, area: 1800]
2013-05-20 23:46:58,882 0 [main] INFO JavaCloneTest - Creating Copy of this object using Clone method
2013-05-20 23:46:58,882 0 [main] INFO JavaCloneTest - Copy Rectangle [width: 30, height: 60, area: 1800]
2013-05-20 23:46:58,882 0 [main] INFO JavaCloneTest - copy != rec : true
2013-05-20 23:46:58,882 0 [main] INFO JavaCloneTest - copy.getClass() == rec.getClass() : true
2013-05-20 23:46:58,882 0 [main] INFO JavaCloneTest - copy.equals(rec) : true
2013-05-20 23:46:58,882 0 [main] INFO JavaCloneTest - Original object :Rectangle [width: 45, height: 100, area: 4500]
2013-05-20 23:46:58,882 0 [main] INFO JavaCloneTest - Cloned object :Rectangle [width: 30, height: 60, area: 1800]
From output, you can clearly see that cloned object has same attribute as original object in Java. Also changing attribute of original object is not affecting state of copy object, because they only contains primitive fields, had then contain any mutable object, it would have affected both of them. You can also see that it follow standard properties of cloned object i.e. clone != original, clone.getClass() == original.getClass() and clone.equals(original).
Things to Remember - Clone method in Java
1) Clone method is used to create a copy of object in Java. In order to use clone() method, class must implement java.lang.Cloneable interface and override protected clone() method from java.lang.Object. A call to clone() method will result in CloneNotSupportedException, if that class doesn't implement Cloneable interface.
2) No constructor is called during cloning of Object in Java.
3) Default implementation of clone() method in Java provides "shallow copy" of object, because it creates copy of Object by creating new instance and then copying content by assignment, which means if your Class contains a mutable field, then both original object and clone will refer to same internal object. This can be dangerous, because any change made on that mutable field will reflect in both original and copy object. In order to avoid this, override clone() method to provide deep copy of object.
4) By convention, clone of an instance should be obtained by calling super.clone() method, this will help to preserve invariant of object created by clone() method i.e. clone != original and clone.getClass() == original.getClass(). Though these are not absolute requirement as mentioned in Javadoc.
5) Shallow copy of an instance is find, until it only contains primitives and Immutable objects, otherwise, you need to modify one or more mutable fields of object returned by super.clone, before returning it to caller.
That's all on How clone method works in Java. Now we know, what is clone and what is Cloneable interface, couple of things about clone method and what does default implementation of clone method do in Java. This information is enough to move ahead and read second part of this Java cloning tutorial, on which we will learn, how to override clone() method in Java, for classes composed with primitives, Mutable and Immutable objects in Java.
Recommended Book
Like most of important topics in Java, Joshua Bloch has shared some words of wisdom on object cloning and clone method in Java. I highly suggest going through those items on his evergreen Effective Java book.
Recommended Book
Like most of important topics in Java, Joshua Bloch has shared some words of wisdom on object cloning and clone method in Java. I highly suggest going through those items on his evergreen Effective Java book.
11 comments :
"a.clone() != a, which means original and clone are two separate object in Java heap, a.clone().getClass() == a.getClass() and clone.equals(a), which means clone is exact copy of original object"
So hashcode is different but equals should return true. Isn't this a violation of the equals-hashcode contract?
How do you relate hash code here... ?
c the hash code calculation in above rectangle class.. its has nothing to do with the reference of the object..
I have a question.. Why is clone method protected by implementation ?
Why it is protected - so that you can't call it on any object - the default shallow copy can be risky unless the class explicitly says its ok. Classes should override it, return the appropriate type (instead of Object) and make the visibility public.
HI, even hashcode is different , but equals should return true, it is not violation of equal-hashcode contract, because if two objects having same hashcode, those objects value may be same , may not be same. but when two objects having same value, both objects hashcode must be same.
please provide the links for the 2nd and 3rd part of the clone article
Return type of OVERRIDDEN clone method is Object in superclass, how can it be narrower i.e. Rectangle in subclass? Please correct me if I am wrong.
"Anonymous said...
Return type of OVERRIDDEN clone method is Object in superclass, how can it be narrower i.e. Rectangle in subclass? Please correct me if I am wrong."
--> Java overriding rule: 'The return type should be the same or a subtype of the return type declared in the original overridden method in the superclass'
Nice explanation.
this is nice explanation :)
where is 2nd and 3rd page?
Does Cloning makes use of reflection? If yes could you please explain why and how?
Thanks.
"This information is enough to move ahead and read second part of this Java cloning tutorial, on which we will learn, how to override clone() method in Java, for classes composed with primitives, Mutable and Immutable objects in Java."
Can't find second part!!! | http://javarevisited.blogspot.com/2013/09/how-clone-method-works-in-java.html?showComment=1390883197618 | CC-MAIN-2015-14 | refinedweb | 1,904 | 55.13 |
One of the big new features in perl 5.8 is that we now have real working threads available to us through the threads pragma.
However, for us module authors who already have to support our modules on different versions of perl and different platforms, we now have to deal with another case: threads! This article will show you how threads relate to modules and how we can take old modules and make them thread-safe, and will round off with a new module that alters Perl's handling of the "current working directory".
To run the examples I have shown here, you need perl 5.8 RC1 or later, compiled with threads. On Unix, you can use Configure -Duseithreads -Dusethreads; on Win32, the default build will always have threading enabled.
How do threads relate to modules?
Threading in Perl is based on the notion of explicit shared data. That
is, only data that is explicitly requested to be shared will be shared
between threads. This is controlled by the threads::shared pragma and the ": shared" attribute. Witness how it works:
use threads;

my $var = 1;
threads->create(sub { $var++ })->join();
print $var;
If you are accustomed to threading in most other languages (Java/C), you would expect $var to contain a 2 and the result of this script to be "2". However, since Perl does not share data between threads, $var is copied into the thread and only incremented there. The original value in the main thread is not changed, so the output is "1".
threads::shared and a
: shared attribute we get
the desired result:
use threads;
use threads::shared;

my $var : shared = 1;
threads->create(sub { $var++ })->join();
print $var;
Now the result will be "2", since we declared $var to be a shared variable. Perl will then act on the same variable and provide automatic locking to keep the variable out of trouble.
This makes it quite a bit simpler for us module developers to make sure our modules are thread-safe. Essentially, all pure Perl modules are thread-safe because any global state data, which is usually what gives you thread-safety problems, is by default local to each thread.
Definition of thread-safe levels
To define what we mean by thread-safety, here are some terms adapted from the Solaris thread-safety levels.
- thread-safe
- This module can safely be used from multiple threads. The effect of calling into a safe module is that the results are valid even when called by multiple threads. However, thread-safe modules can still have global consequences; for example, sending or reading data from a socket affects all threads that are working with that socket. The application has the responsibility to act sanely with regard to threads. If one thread creates a file with the name file.tmp, then another thread that tries to create it will fail; this is not the fault of the module.
- thread-friendly
- Thread-friendly modules are thread-safe modules that know about and provide special functions for working with threads, or utilize threads by themselves. A typical example of this is the core Thread::Queue module. One could also imagine a thread-friendly module with a cache that declares the cache to be shared between threads, to make hits more likely and to save memory (a minimal sketch of such a cache follows after this list).
- thread-unsafe
- This module cannot safely be used from different threads; it is up to the application to synchronize access to the module and to use it the way it is specified. Typical examples here are XS modules that utilize external unsafe libraries that might only allow one thread to execute them.
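For instance, here is a minimal sketch of what such a thread-friendly shared cache could look like. The My::Cache name and its get/set API are invented for illustration; they are not part of any real module:

package My::Cache;
use strict;
use warnings;
use threads::shared;

my %cache : shared;   # one cache shared by every thread

sub get {
    my ($key) = @_;
    lock(%cache);     # advisory lock while we read
    return $cache{$key};
}

sub set {
    my ($key, $value) = @_;
    lock(%cache);     # advisory lock while we write
    # plain strings/numbers only - references would themselves need to be shared
    $cache{$key} = $value;   # a value stored by one thread now helps all threads
}

1;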
Since Perl only shares when asked to, most pure Perl code probably falls into the thread-safe category. That doesn't mean you should trust a module until you have reviewed its source code or it has been marked thread-safe by its author. Typical problems include using alarm(), mucking around with signals, working with relative paths and depending on %ENV. However, remember that ALL XS modules that don't state anything fall squarely into the thread-unsafe category.
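To see why relative paths are a typical problem, consider this minimal demonstration (the directory and file names are made up for illustration):

use threads;

chdir("/tmp") or die "chdir: $!";
threads->create(sub { chdir("/etc") })->join();

# The current working directory is a per-process attribute, so the
# chdir() in the other thread changed it for this thread as well:
# this now looks for /etc/data.txt instead of /tmp/data.txt.
open my $fh, '<', "data.txt" or warn "can't open data.txt: $!";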
Why should I bother making my module thread-safe or thread-friendly?
Well, it usually isn't much work, and it will make the users who want to use your modules in a threaded environment very happy. What? Threaded Perl environments aren't that common, you say? Wait until Apache 2.0 and mod_perl 2.0 become available. One big change is that Apache 2.0 can run in threaded mode, and then mod_perl will have to run in threaded mode; this can be a huge performance gain on some operating systems. So if you want your modules to work with mod_perl 2.0, taking a look at thread-safety levels is a good thing to do.
So what do I do to make my module thread-friendly?
A good example of a module that needed a little modification to work with threads is Michael Schwern's most excellent Test::Simple suite (Test::Simple, Test::More and Test::Builder). Surprisingly, we had to change very little to fix it.
The problem was simply that the test numbering was not shared between threads.
For example:
use threads;
use Test::Simple tests => 3;

ok(1);
threads->create(sub { ok(1) })->join();
ok(1);
Now that will return
1..3
ok 1
ok 2
ok 2
Does it look similar to the problem we had earlier? Indeed it does; it seems like somewhere there is a variable that needs to be shared.
Reading the documentation of Test::Simple, we find out that all the magic is really done inside Test::Builder. Opening up Builder.pm, we quickly find the following lines of code:
my @Test_Results = ();
my @Test_Details = ();
my $Curr_Test    = 0;
Now we would be tempted to add use threads::shared and a : shared attribute:
use threads::shared;

my @Test_Results : shared = ();
my @Test_Details : shared = ();
my $Curr_Test    : shared = 0;
However, Test::Builder needs to work back to Perl 5.4.4! Attributes were only added in 5.6.0, and the above code would be a syntax error in earlier Perls. And even if someone were using 5.6.0, threads::shared would not be available for them.
The solution is to use the runtime function share() exported by threads::shared, but we only want to do it on 5.8.0 or later, and only when threads have been enabled. So, let's wrap it in a BEGIN block and an if.
BEGIN {
    if( $] >= 5.008 && exists($INC{'threads.pm'}) ) {
        require threads::shared;
        import threads::shared qw(share);

        share($Curr_Test);
        share(@Test_Details);
        share(@Test_Results);
    }
}
So, if we are on 5.8.0 or higher and threads has been loaded, we do the runtime equivalent of use threads::shared qw(share); and call share() on the variables we want to be shared.
Now let's look at some examples of where $Curr_Test is used. We find sub ok {} in Test::Builder; I won't include it in full here, only a smaller version which contains:
sub ok {
    my($self, $test, $name) = @_;

    $Curr_Test++;
    $Test_Results[$Curr_Test-1] = 1 unless($test);
}
Now, this looks like it should work, right? We have shared $Curr_Test and @Test_Results. Of course, things aren't that easy; they never are. Even if the variables are shared, two threads could enter ok() at the same time. Remember that not even the statement $Curr_Test++ is an atomic operation; it is just a shortcut for writing $Curr_Test = $Curr_Test + 1. So let's say two threads do that at the same time:
Thread 1: add 1 + $Curr_Test
Thread 2: add 1 + $Curr_Test
Thread 2: Assign result to $Curr_Test
Thread 1: Assign result to $Curr_Test
The effect would be that $Curr_Test would only be increased by one, not two! Remember that a switch between two threads could happen at ANY time, and if you are on a multiple CPU machine they can run at exactly the same time! Never trust thread inertia.
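To make the race concrete, here is a small self-contained demonstration of such lost updates; the final count varies from run to run and from platform to platform, but it will usually come out below 400000:

use threads;
use threads::shared;

my $counter : shared = 0;

# Four threads, each incrementing the shared counter 100,000 times.
# Without lock(), the separate fetch and store steps can interleave.
my @workers = map {
    threads->create(sub { $counter++ for 1 .. 100_000 });
} 1 .. 4;
$_->join() for @workers;

print "$counter\n";   # usually less than 400000 - some updates were lost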
So how do we solve it? We use the lock() keyword. lock() takes a shared variable and locks it for the rest of the scope, but it is only an advisory lock, so we need to find every place where $Curr_Test is used or modified and is expected not to change. The ok() becomes:
sub ok {
    my($self, $test, $name) = @_;

    lock($Curr_Test);
    $Curr_Test++;
    $Test_Results[$Curr_Test-1] = 1 unless($test);
}
So are we ready? Well, lock() was only added in Perl 5.5, so we need to add an else to the BEGIN clause to define a lock function if we aren't running with threads. The end result would be:
my @Test_Results = ();
my @Test_Details = ();
my $Curr_Test    = 0;

BEGIN {
    if( $] >= 5.008 && exists($INC{'threads.pm'}) ) {
        require threads::shared;
        import threads::shared qw(share);

        share($Curr_Test);
        share(@Test_Details);
        share(@Test_Results);
    }
    else {
        *lock = sub(*) {};
    }
}

sub ok {
    my($self, $test, $name) = @_;

    lock($Curr_Test);
    $Curr_Test++;
    $Test_Results[$Curr_Test-1] = 1 unless($test);
}
In fact, this is very much like the code that has been added to Test::Builder to make it work nicely with threads. The only thing that isn't exact is ok(), as I cut it down to what was relevant. There were roughly 5 places where lock() had to be added. Now the test code would print:
1..3
ok 1
ok 2
ok 3
which is exactly what the end user would expect. All in all, this is a rather small change for this 1291-line module: we change roughly 15 lines in a non-intrusive way, and the documentation and test-case code make up most of the patch. The full patch is at
Altering Perl's behavior to be thread-safe, ex::threads::cwd
Some things change when you use threads; some things that you or a module might do no longer work like they used to. Most of the changes are due to the way your operating system treats processes that use threads. Each process typically has a set of attributes, which include the current working directory, the environment table, the signal subsystem and the pid. Since threads are multiple paths of execution inside a single process, the operating system treats them as a single process and you have a single set of these attributes.
Yep. That's right - if you change the current working directory in one thread, it will also change in all the other threads! Whoops, better start using absolute paths everywhere - but all the code that uses your module might use relative paths. Aaargh...
Don't worry, this is a solvable problem. In fact, it's solvable by a module.
Perl allows us to override functions using the CORE::GLOBAL namespace. This will let us override the functions that deal with paths and set the cwd correctly before issuing the command. So let's start off:
package ex::threads::safecwd;

use 5.008;
use strict;
use warnings;
use threads::shared;

our $VERSION = '0.01';
Nothing weird here, right? Now, when changing and dealing with the current working directory, one often uses the Cwd module, so let us make the Cwd module safe first. How do we do that?
1) use Cwd;
2) our $cwd = cwd; # our per-thread cwd, init on startup from cwd
3) our $cwd_mutex : shared; # the variable we use to sync
4) our $Cwd_cwd = \&Cwd::cwd;
5) *Cwd::cwd = *my_cwd;

   sub my_cwd {
6)     lock($cwd_mutex);
7)     CORE::chdir($cwd);
8)     $Cwd_cwd->(@_);
   }
What's going on here? Let's analyze it line by line:
- We include Cwd.
- We declare a variable and assign to it the cwd we start in. This variable will not be shared between threads and will contain the cwd of this thread.
- We declare a variable we will be using to lock for synchronizing work.
- Here we take a reference to &Cwd::cwd and store it in $Cwd_cwd.
- Now we hijack Cwd::cwd and assign to it our own my_cwd, so whenever someone calls Cwd::cwd, it will call my_cwd instead.
- my_cwd starts off by locking $cwd_mutex so no one else will muck around with the cwd.
- After that we call CORE::chdir() to actually set the cwd to what this thread is expecting it to be.
- And we round off by calling the original Cwd::cwd that we stored in step 4, with any parameters that were handed to us.
In effect, we have hijacked Cwd::cwd and wrapped it in a lock and a chdir so it will report the correct thing!
Now that cwd() is fixed, we need a way to actually change the directory. To do this, we install our own global chdir, simply like this:
*CORE::GLOBAL::chdir = sub {
    lock($cwd_mutex);
    CORE::chdir($_[0]) || return undef;
    $cwd = $Cwd_cwd->();
};
Now, whenever someone calls chdir(), our chdir will be called instead. In it, we start by locking the variable controlling access; then we try to chdir to the directory to see if it is possible, and if it is not, we do what the real chdir would do: return undef. If it succeeds, we assign the new value to our per-thread $cwd by calling the original Cwd::cwd().
The above code is actually enough to allow the following to work:
use threads;
use ex::threads::safecwd;
use Cwd;

chdir("/tmp");
threads->create(sub { chdir("/usr") })->join();
print cwd() eq '/tmp' ? "ok" : "nok";
The chdir("/usr") inside the thread will not affect the other thread's $cwd variable, so when cwd() is called, we lock $cwd_mutex, chdir() to the location the thread's $cwd contains, and perform a cwd().
While this is useful, we need to move along and provide some more functions to extend the functionality of this module:
*CORE::GLOBAL::mkdir = sub {
    lock($cwd_mutex);
    CORE::chdir($cwd);
    if(@_ > 1) {
        CORE::mkdir($_[0], $_[1]);
    }
    else {
        CORE::mkdir($_[0]);
    }
};

*CORE::GLOBAL::rmdir = sub {
    lock($cwd_mutex);
    CORE::chdir($cwd);
    CORE::rmdir($_[0]);
};
The above snippet does essentially the same thing for both mkdir and rmdir. We lock $cwd_mutex to synchronize access, then we chdir to $cwd and finally perform the action. Worth noticing here is the check we need to do for mkdir to be sure its prototype behavior is correct.
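To make the per-thread behavior concrete, here is a hypothetical usage sketch, assuming the module as developed in this section; the directory names are invented for illustration:

use threads;
use ex::threads::safecwd;

chdir("/tmp");
threads->create(sub {
    chdir("/var/tmp");
    mkdir("scratch");   # created relative to this thread's cwd: /var/tmp/scratch
})->join();
mkdir("scratch");       # created relative to our own cwd: /tmp/scratch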
Let's move on with opendir, open, readlink, readpipe, require, rmdir, stat, symlink, system and unlink. None of these are really any different from the above, with the big exception of open.
open has a weird special case, since it can take both a HANDLE and an empty scalar for autovivification of an anonymous handle.
*CORE::GLOBAL::open = sub (*;$@) {
    lock($cwd_mutex);
    CORE::chdir($cwd);

    if(defined($_[0])) {
        use Symbol qw();
        my $handle = Symbol::qualify($_[0], (caller)[0]);
        no strict 'refs';
        if(@_ == 1) {
            return CORE::open($handle);
        }
        elsif(@_ == 2) {
            return CORE::open($handle, $_[1]);
        }
        else {
            return CORE::open($handle, $_[1], @_[2..$#_]);
        }
    }
Starting off with the usual lock and chdir(), we then need to check if the first value is defined. If it is, we have to qualify it to the caller's namespace. This is what would happen if a user does open FOO, "+>foo.txt". If the user instead does open main::FOO, "+>foo.txt", then Symbol::qualify notices that the handle is already qualified and returns it unmodified. Now, since $_[0] is a read-only alias, we cannot assign over it, so we need to create a temporary variable and then proceed as usual.
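As a quick standalone illustration of how Symbol::qualify behaves (My::Pkg is just a made-up package name):

use Symbol qw();

print Symbol::qualify("FOO", "My::Pkg"), "\n";        # My::Pkg::FOO
print Symbol::qualify("main::FOO", "My::Pkg"), "\n";  # main::FOO - already qualified, returned unmodified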
Now, if the user used the new-style open my $foo, "+>foo.txt", we need to treat it differently. The following code will do the trick and complete the function:
    else {
        if(@_ == 1) {
            return CORE::open($_[0]);
        }
        elsif(@_ == 2) {
            return CORE::open($_[0], $_[1]);
        }
        else {
            return CORE::open($_[0], $_[1], @_[2..$#_]);
        }
    }
};
Wonder why we couldn't just assign $_[0] to $handle and unify the code path? You see, $_[0] is an alias to the $foo in open my $foo, "+>foo.txt", so CORE::open will work correctly. However, if we do $handle = $_[0], we take a copy of the undefined variable, and CORE::open won't do what I mean.
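Here is a tiny standalone sketch of that aliasing difference; the function names are invented for illustration:

sub through_alias {
    $_[0] = "filled in";   # writes through the alias to the caller's variable
}

sub through_copy {
    my $tmp = $_[0];       # takes a copy: the caller's variable stays untouched
    $tmp = "filled in";
}

my ($x, $y);
through_alias($x);
through_copy($y);
print defined $x ? "x is set\n" : "x is undef\n";   # x is set
print defined $y ? "y is set\n" : "y is undef\n";   # y is undef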
So now we have a module that allows you to safely use relative paths in most cases and vastly improves your ability to port code to a threaded environment. The price we pay for this is speed, since every time you do an operation involving a directory you are serializing your program. Typically, you never do those kinds of operations in a hot path anyway. You might do work on your file in a hot path, but as soon as we have gotten the filehandle, no more locking is done.
A couple of problems remain. Performance-wise, there is one big problem with system(): we don't get control back until CORE::system() returns, so all path operations will hang waiting for it while the lock is held. To solve that, we would need to resort to XS and do some magic with regard to the system call. We also haven't been able to override the file test operators (-x and friends), nor can we do anything about qx{}. Solving that problem requires working up and down the optree using B::Generate and B::Utils. Perhaps a future version of the module will attempt that, together with a custom op to do the locking.
Conclusion
Threads in Perl are simple and straightforward; as long as we stay in pure-Perl land, everything behaves just about how we would expect it to. Converting your modules should be a simple matter of programming, without any big wizardry required. The important thing to remember is to think about how your module could take advantage of threads to make it easier to use for the programmer.
Moving over to XS land is altogether different; stay tuned for the next article, which will take us through the pitfalls of converting various kinds of XS modules to the thread-safe and thread-friendly levels.
Introduction To SharePoint Interview Questions And Answers
SharePoint provides a protractible platform that has a dimension of products that provide a business solution to organizations as per various needs. Let us have a look at different SharePoint Interview questions which may be asked when you attempt for an interview..
Here are top ten most asked 2019 Share:
•SharePoint Sites is useful for creating websites.
•Insights act as a tool which brings all information together from different data sources.
•SharePoint Communicates helps in networking and collaborating with different people.
•Search helps in providing efficient and quick information and contents about an enterprise.
•Contents act as a perfect Content Management System.
•Lastly, Composition assists in using different tools and capabilities together.
2.What is the latest version of SharePoint and explain its main features in brief?
Answer:
The latest version of SharePoint is SharePoint 2013. The main features of it are that it has older version which were having performance issues but now have been improved like distributed Cache service, minimal download strategy, and the shredded storage.
4.5 (888 ratings)
₹19999
View Course
3.Explain the terms: Site template, Site definition, and ONET.xml.
Answer:
Site template helps in providing a basic template and layout of a new site which is to be created in SharePoint. The design information contains information about a site which includes:
•List which is to be a part of the site
•Site content like document libraries
•The themes and borders which is to be used on the site
•Web part pages which will be used in the site
In addition to it, it allows other applications of SharePoint to be instantiated whenever required.
Site definition mainly is a collection of XML or ASPX files which element are taken by SharePoint Handler object and they are loaded in environment properly. The file, in general, contains assembly name, namespace, public key token numeric, type name and safe declaration. The items that are not loaded in environment properly will throw an error.
6.What are SPSite and SPWeb? Explain the difference between them.
Answer:
SPSite is a site collection and is represented in an object model. It is the object where we start working with server object model. It is most frequently used in SharePoint application development.
SPWeb, on the other hand, is a site under site collection in SharePoint. It is referred to as SPWeb class which SharePoint list.
8.What is GAC in SharePoint?
Answer:
The Global Assembly Cache contains assembly code or machine code which is used to run a program. It gives custom binaries place into full trust code group. The binaries are deployed to be used in between the sender and the receiver. After signing, the binary will have a public key identifier for itself so that it can be used by sender and receiver. GAC can be used with .NET assemblies cache for command line platform.
9.Explain the concept of Content type in SharePoint.
Answer:
SharePoint has a facility of Contents, hence Content type is referred to a reusable collection of settings and metadata to represent a particular content. For example, an employee content type may have a set of metadata like employee_id, employee_name, salary, etc. It helps organize the content in a more meaningful and organized way. It also supports inheritance of all properties and appearances.
10.What is a Theme?
Answer:
Themes are a tool to customize a site as per the user needs. It applies a lightweight branding by changing overall site layout, colors, background, headers, etc. The latest version has an expandable theming engine with many new facilities which makes customization easier. It provides creation os font schemes and color palettes. These user-specific themes can be added to theme gallery and be saved.
Recommended Article
This has been a guide to List Of SharePoint Interview Questions and Answers so that the candidate can crackdown these SharePoint Interview Questions easily. You may also look at the following articles to learn more –
- Top 5 Most Valuable Data Science Interview Questions
- Most Valuable Credit Analyst Interview Questions
- Ways to Increase Your Website Traffic
- Know The Top 5 Useful DBA Interview Questions And Answer
- Most valuable Magento Interview Questions
Software Development Course - All in One Bundle
600+ Online Courses
3000+ Hours
Verifiable Certificates
Lifetime Access | https://www.educba.com/sharepoint-interview-questions/ | CC-MAIN-2019-22 | refinedweb | 714 | 55.54 |
This module provides memory mapping facilities that allow a user to map files into the virtual address space of the process. There are various options that can be used when mapping a file into memory, such as copy on write. Not all of these options are available on all platforms, hymmap_capabilities provides the list of supported options. Note also that on some platforms memory mapping facilites do not exist at all. On these platforms the API will still be available, but will simply read the file into allocated memory.
#include <windows.h>
#include "hyport.h"
Check the capabilities available for HYMMAP at runtime for the current platform.
Map a file into memory.
PortLibrary shutdown.
PortLibrary startup.
This function is called during startup of the portLibrary. Any resources that are required for the memory mapping operations may be created here. All resources created here should be destroyed in hymmap_shutdown.
UnMap previously mapped memory.
Genereated on Tue Dec 9 14:12:59 2008 by Doxygen.
(c) Copyright 2005, 2008 The Apache Software Foundation or its licensors, as applicable. | http://harmony.apache.org/externals/vm_doc/html/hymmap_8c.html | CC-MAIN-2014-15 | refinedweb | 176 | 58.99 |
On Mon, Oct 24, 2016 at 10:50 AM, Ryan Birmingham [email protected] wrote:
I also believe that using a text file would not be the best solution; using a dictionary,
actually, now that you mention it -- .translate() already takes a dict, so if youw ant to put your translation table in a text file, you can use a dict literal to do it:
# contents of file:
{ 32: 95,
105: 64,
115: 36, }
then use it:
s.translate(ast.literal_eval(open("trans_table.txt").read())) now all you need is a tiny little utility function:
def translate_from_file(s, filename): return s.translate(ast.literal_eval(open(filename).read()))
:-)
-Chris
other data structure, or anonomyous function would make more sense than having a specially formatted file.
On Oct 24, 2016 13:45, "Chris Barker" [email protected] wrote:
my thought on this:
If you need translate() you probably can write the code to parse a text file, and then you can use whatever format you want.
This seems a very special case to build into the stdlib.
-CHB
On Mon, Oct 24, 2016 at 10:39 AM, Mikhail V [email protected] wrote:
Hello all,
I would be happy to see a somewhat more general and user friendly version of string.translate function. It could work this way: string.newtranslate(file_with_table, Drop=True, Dec=True)
So the parameters:
- "file_with_table" : a text file with table in following format:
#[In] [Out]
97 {65} 98 {66} 99 {67} 100 {} ... 110 {110}
Notes: All values are decimal or hex (to switch between parsing format use Dec parameter) As it turned out from my last discussion, majority prefers hex notation, so I am not in mainstream with my decimal notation here, but both should be supported. Empty [Out] value {} means that the character will be deleted.
- "Drop = True" this will set the default behavior for those values
which are NOT in the table.
For Drop = True: all values not defined in table set to [out] = {}, and be deleted.
For Drop=False: all values not defined in table set [out] = [in], so those remain as is.
- Dec= True : parsing format Decimal/hex. I use decimal everywhere.
Further thoughts: for 8-bit strings this should be simple to implement I think. For 16-bit of course there is issue of memory usage for lookup tables, but the gurus could probably optimise it. E.g. at the parsing stage it is not necessary to build the lookup table for whole 16-bit range of course, but take only values till the largest ordinal present in the table file.
About the format of table file: I suppose many users would want also to define characters directly, I am not sure if it is really needed, but if so, additional brackets or escape char could be used, like this for example:
a {A} \98 {\66} \99 {\67}
but as said I don't like very much the idea and would be OK for me to use numeric values only.
So approximately I see it. Feel free to share thoughts or criticise.
Mikhail _______________________________________________ Python-ideas mailing list [email protected] Code of Conduct:
--
Python-ideas mailing list [email protected] Code of Conduct: | https://mail.python.org/archives/list/[email protected]/message/OHVBEZ4KNFPHU74BQZPGWN23GWAY6R5O/ | CC-MAIN-2021-21 | refinedweb | 534 | 70.63 |
This appendix contains the following topics:9i SQL Reference and the PL/SQL User's Guide and Reference.
Pro*C/C++ keywords, like C or C++ keywords, should not be used as variables in your program. Otherwise, an error will be generated. An error may result if they are used as the name of a database object such as a column. Here are the keywords used in Pro*C/C++:
The following table the table is not a comprehensive list of all functions within the Oracle reserved namespaces. For a complete list of functions within a particular namespace, refer to the document that corresponds to the appropriate Oracle library. | http://docs.oracle.com/cd/B10501_01/appdev.920/a97269/pc_abres.htm | CC-MAIN-2016-26 | refinedweb | 109 | 54.22 |
RDFLib is a pure Python package for working with RDF. RDFLib contains most things you need to work with RDF, including:
The RDFlib community maintains many RDF-related Python code repositories with different purposes. For example:
Please see the list for all packages/repositories here:
5.x.ysupports Python 2.7 and 3.4+ and is mostly backwards compatible with 4.2.2. Only bug fixes will be applied.
6.x.yis the next major release which will support Python 3.6+. (Current master branch)
RDFLib may be installed with Python's package management tool pip:
$ pip install rdflib
Alternatively manually download the package from the Python Package Index (PyPI) at
The current version of RDFLib is 5.0.0, see the
CHANGELOG.md
file for what's new in this release.
RDFLib aims to be a pythonic RDF API. RDFLib's main data object is a
Graph which is a Python collection
of RDF Subject, Predicate, Object Triples:
To create graph and load it with RDF data from DBPedia then print the:
from rdflib.namespace import DC, DCTERMS, DOAP, FOAF, SKOS, OWL, RDF, RDFS, VOID, XMLNS, XSD
You can use them like this:).
Or like this, adding a triple to a graph
g:
g.add(( rdflib.URIRef(""), FOAF.givenName, rdflib.Literal("Nick", datatype=XSD.string) ))
The triple (in n-triples notation)
<> <> "Nick"^^<> .
is created where the property
FOAF.giveName is the URI
<> and
XSD.string is the
URI
<>.
You can bind namespaces to prefixes to shorten the URIs for RDF/XML, Turtle, N3, TriG, TriX & JSON-LD serializations:
g.bind("foaf", FOAF) g.bind("xsd", XSD)
This will allow the n-triples triple above to be serialised like this:
print(g.serialize(format="turtle").decode("utf-8"))
With these results:
PREFIX foaf: <> PREFIX xsd: <> <> foaf:givenName "Nick"^^xsd:string .
New Namespaces can also be defined:
dbpedia = rdflib.Namespace('') abstracts = list(x for x in g.objects(semweb, dbpedia['abstract']) if x.language=='en')
See also ./examples
The library contains parsers and serializers for RDF/XML, N3, NTriples, N-Quads, Turtle, TriX, RDFa and Microdata. JSON-LD parsing/serializing can be achieved using the JSON-LD plugin.
Multiple other projects are contained within the RDFlib "family", see.
See for our documentation built from the code.
For general "how do I..." queries, please use and tag your question with
rdflib.
Existing questions:
See for the release schedule.
RDFLib survives and grows via user contributions! Please read our contributing guide to get started. Please consider lodging Pull Requests here:
You can also raise issues here:
If you want to contact the rdflib maintainers, please do so via the rdflib-dev mailing list: | https://awesomeopensource.com/project/RDFLib/rdflib | CC-MAIN-2020-50 | refinedweb | 440 | 61.02 |
>>)
Re:enumerators (Score:4, Informative)
The type-safe enum pattern shows the correct way of handling enumerations. And you can the Jakarta Commons Lang library [apache.org] to make it a bit easier.
Re:enumerators (Score:4, Funny):enumerators (Score:5, Interesting)
Evil thought: you could get relatively nice-looking static instances with methods if you combined enums with anonymous inner classes...
Re:enumerators (Score:4, Insightful)
I would think they'd have to be like singletons, with the compiler creating exactly one instance of each enum value.
Re:enumerators (Score:4, Informative)
public class Season {
static public final Season spring = new Season();
static public final Season summer = new Season();
static public final Season fall = new Season();
static public final Season winter = new Season();
private Season() { }
};
Except it's only one line, there are useful additional methods (like a toString), and you can use it in a switch statement.
Re:enumerators (Score.
Generics (Score:3, Interesting)
Re:Generics (Score:5, Interesting)
Re:Generics (Score:3, Insightful)
Re:Generics (Score:5, Informative)
Re:Generics (Score:3, Insightful)
For some interesting words on generics, and implementation for both Java and M$ CLR take a peak at this link: r l=
All you "I hate M$" people out there will more than likely me
Re:Generics (Score:3, Informative)
The method compareTo is supposed to override the method in Comparable, which takes an object. So they create a bridge method that overrides it normally: cl
Re:Generics (Score:3, Insightful)
By comparison, by telling the compiler that you want all of the objects in some container to be
Programming shortcuts (Score:5, Funny)
I agree. This is why I never create my own functions or methods. Evey program should be just one big function.
Re:Programming shortcuts (Score:5, Insightful)
Oh, do I agree, in boldface.:Programming shortcuts (Score:4, Insightful)
Re:Programming shortcuts (Score:3, Funny)
Maybe it's just me, but I think operator overloading is closer to syntactic poison than syntactic sugar:Programming shortcuts (Score:4, Interesting)
Every language has idioms, and a programmer should use those idioms in preference to other allowable ways to do things unless they have a good reason. It's all just part of good style.
Agreed.. (Score:5, Insightful).
Yeah, but... (Score:5, Funny).
So basically C# minus generics (Score:3, Insightful)
Where are true properties though?.
Re:Wow, how intelligent (Score:4, Funny)
I love the language. It's the lack of libraries that kills you.
programming shorthand (Score:5, Funny)
My code was hard to write to it should be hard to read.
Re:programming shorthand (Score:3, Insightful).
Uglification? (Score:5, Insightful)
It appears that Java's way to solve run time errors is to screw the bolts as tight a possible during compile time. Will generics become THE way, or just remain one of the options?
Re:Uglification? (Score:3, Insightful)
But for that other 10% of the time, remember that all classes are children of Object, so I'm betting you c
Re:Uglification? (Score:5, Informative)
Well, generics remind me of C++ templates
They're not quite the same; C++ templates are essentially glorified preprocessor macros with
some relatively small checking and a rather baroque
underlying functional language. Generics are more
concrete than that.
Not to mention that attached to variable name doesn't make code any more attractive to look at.
It would be really neat to have type inference there
It appears that Java's way to solve run time errors is to screw the bolts as tight a possible during compile time.
That's the idea, and that's also what I try to do when writing programs. Why should I have to write half a dozen test suites for some simple program property if the type checker can tell me whether it'll work right?
Remember: Compilers don't do type checking just to optimise, but also to catch programming errors. And Generics allow you to catch a much more interesting class of these.
-- Christoph
Retro... (Score:3, Interesting)
One line summary (Score:5, Funny):not just sugar (Score:3, Informative)
You really need to try a generating/refactoring IDE like Eclipse [eclipse.org]. I once held to the orthodoxy that if I needed more than emacs then something was broken in the language. I grew up on object systems like CLOS where if you wanted a getter or setter you just asked for it in the definition of a field. So at first C++'s lack of public read-only/private read-write vars ann <= 1
Re:Shorthand programming (Score:3, Interesting)
Interestingly, ISO C defines shorthands for people with crap keyboards. You can type <% instead of { and %> instead of }. Also:
:> means ]
<: means [
%: means #
%:%: means ##
You get even more if you include iso646.h. Of course, the old compiler you were using may not have had thes
Re:Shorthand programming (Score:3, Insightful)
I know I can change this to eliminate the empty ';', but I choose not to, because I feel that the conditional reads better as a positive statement.
At the cost of failing to compile.
Enumeration classes (Score:3, Informative)
final class Color {
String c;
private Color (String color) { c = color; }
String toString() { return c; }
static final Color RED = new Color("red");
static final Color BLUE = new Color("blue");
static final Color GREEN = new Color("green");
}
You can then treat this class like a type-safe enumeration. It doesn't have all of the nifty features that you'll see in languages like Ada, but it has the nice property of allowing you to attach whatever information you want to the enumeration class.
You can also use this approach to create self-initializing classes, e.g., a list of states (including full name, postal abbreviations, shipping zone, etc.) from a database. You can access the enumerated values through a collection class, a "lookup" method, or even reflection.
Uh, read the article (Score:4, Informative):Article didn't mention new concurrency stuff (Score:3, Funny)
Sounds like a "super" class to me.
Re:Article didn't mention new concurrency stuff (Score:3, Funny)
Sounds like a "super" class to me.
Sounds like a porn star's name to me.
JCP strikes again (Score:5, Informative)
Re:Article didn't mention new concurrency stuff (Score:4, Informative)
How this compares to C++ (Score:5, Informative)
The new Java generics are really weak compared to C++ template support. This is probably partially due to difficult in compiler support and also complexity (templates are without a doubt the most complex feature of C++). I was disappointed though in Java generics mainly due to lack of any kind of specialization support and also about the strange paradigm used for Iterators (instead of an iterator being class defined with a consistant interface, it's an external class that just behaves that must wrap a behavior around the class).
Enhanced for loop
This is for_each() in C++. Now, with for_each, you have to use function objects which is arguable as to whether it's more readable. Fortunately, Boost has developed a boost::lambda class that allows for code to be used as the third parameter. This is _really_ neat.
Autoboxing/unboxing
I presume this means that primatives can't be used in generics.. That's kind of sad. This isn't a problem in C++.
Typesafe enums
This would be a nice linguistic pattern to have in C++. As it stands, the equivalent would be:
struct Coin { enum { penny, nickel, dime, quarter }; };
Static import
This can be achieved via using in C++. Of course, Java doesn't even really have a namespace paradigm so it's not really a fair comparision.
Metadata
This is.. well.. strange. I didn't see the syntax for doing something like this. If it is just keyword/code replacing, then an #include directive would work just as well.
Re:How this compares to C++ (Score:5, Informative)
Metadata
This is.. well.. strange. I didn't see the syntax for doing something like this. If it is just keyword/code replacing, then an #include directive would work just as well.
IMO, metadata is the coolest thing. It's a feature of C# which has had little recognition despite its coolness.
In both Java and C# you can use reflection to find out information about a class (class name, method names, etc). Attributes/metadata allow you to attach information to just about every element of a class/struct so that it can be queried dynamically using the reflection apis.
Imagine them as JavaDoc tags that aren't discarded at compile time but are instead compiled into a class's meta data. They'll do for source code what XML did for HTML -- give more meaning to the code.
Here's an example of using attributes/metadata to simplify XML serialization: The C# XmlSerializer class dynamically generates the IL that will do the serialization so it is *very fast*. It knows how to map the field names to element/attribute names by inspecting the attributes.
Some other obvious uses include object/relational mapping (no need for external XML mapping files) and XMLRPC (just mark a method as Remotable!) etc. You can imagine infinite other uses for attributes/metadata.
I'm not sure how it works in Java but in C#, attributes are simply classes (usually with a name ending in 'Attribute'). You can define your own custom attributes and your own classes that work with them. It's very cool.
Re:How this compares to C++ (Score:3, Informative)
struct Coin { enum { penny, nickel, dime, quarter }; };
Not equivalent, the Java version also supports writing as a String
System.out.println("coin: " + Coin.PENNY);
What Bjarne Stroustrup has to say about Java (Score:5, Informative)
This is what he said about Java [att.com] and the [att.com] likes [att.com].
Also here [eptacom.net]..
This is a sad day for me. (Score:3, Interesting)
To the many cooks who spoiled the broth: "Thanks a lot, assholes."
I think these are all great... (Score:4, Insightful)
for (TimerTask task : c)
task.cancel();
becomes
foreach (TimerTask task in c)
task.cancel();
Re:I think these are all great... (Score:4, Informative)
Re:I think these are all great... (Score:3, Funny)
Programming Myth (Score:3)
False.
Shorter code and more powerful idioms makes a program *easier* to debug.
Too much code makes the purpose lost in the noise. Consider:
foreach (@array) {foo $_};
and
my $i;
for ($i = 0; $i < $#array; $i++)
{
foo $array[$i];
}
(I use Perl here since it is well known and I can show the 2 idioms, but really Perl isn't the clearest language of all). One form is much clearer and reveal much more eloquently what is the intent of the code.
Of course, this doesn't mean that shorter code "at all cost" is easier to debug, it means that "less code" is better.
Oh, and also it pisses me off when management tells me I need to use only certains idioms in case someone comes over my code and don't understand it. I hate to have to program like the next maintainer will be clueless and ignorant.:Looking to Get Back into Java (Score:5, Informative)
Re:Looking to Get Back into Java (Score:4, Informative)
My roommate told me about it, and once I started using it I never looked back.
Re:Looking to Get Back into Java (Score:4, Informative)
Re:Looking to Get Back into Java (Score:4, Informative)
;-)
what more do you need?
If you want a *real* IDE, I'd check out IntelliJ's Idea [intellij.com] product. It's a few hundred $$$. Lots of folks like Netbeans [netbeans.org] and IBM's Eclipse as well (sorry, no url to eclipse, but I'm sure you can find it). The latter 2 are opensource.
Netbeans (Score:3, Informative)
Re:Looking to Get Back into Java (Score:3, Funny)
Harsh.
Very cruel man.
Re:Write once, Rewrite forever? (Score:5, Informative)
Re:Write once, Rewrite forever? (Score:3, Informative)
Re:Write once, Rewrite forever? (Score:3, Insightful)
What part of the new syntax would cause old code to break? s p
Re:Everything must change... (Score:3, Insightful)
Now, if you want a weakly/dynamically/non-typed language, you should use one, and deal with the inevitable tradeoffs. It's not like there's a shortage of non-strongly-typed languages out
Re:Where is operator overloading? (Score:3, Interesting)
Re:Where is operator overloading? (Score:3, Informative)
There was a preprocessor named jpp floating around a few years ago that supported operator overloading for Java. It seems to have vanished off the net in the meantime, though I'm sure I have a copy somewhere. True operator overloading is supposed to be added to Java at some:ooooh baby (Score:5, Funny)
You don't get out much, do you?
Re:foreach (Score:3, Insightful) | http://developers.slashdot.org/story/03/05/09/1514223/summary-of-jdk15-language-changes?sbsrc=thisday | CC-MAIN-2015-48 | refinedweb | 2,161 | 63.19 |
schedule a meeting for 11pm EST to discuss the PyYaml approach to
putting dates into data files.
Here are my customers:
1) Mike uses the PyYaml library for a calendar application, and he wants to have
certain unquoted strings get through the parser without choking. He seems to be
happy if YAML just gives him the date/time-ish looking fields at strings, and he
will just do his own logic to make them into date objects.
2) The YAML standards body, mostly represented by Oren and Brian on this issue,
wants PyYaml to conform to the YAML spec, mostly to ensure that YAML becomes a
solution for language interoperability.
3) Clark uses the PyYaml library for a scheduling application, and he wants to
have certain unquoted strings automatically be converted to mxDateTime objects
in Python..
Suppose you have this document:
Project:
Start date: 2002-09-15
End date: 2002-10-04
From YAML's perspective, all the datatypes are strings. You might run this YAML
through 3 different programs:
SlideShowell in Python -- For putting this YAML data into a slideshow
presentation, the thingies are just strings.
YOD in Ruby -- Again, we would just want to treat the dates as strings. The
semantic content of the data just doesn't matter.
Clark's project management software -- Clark would want to upgrade the thingies
from strings to mxDateTime values, because he wants to do fancy date arithmetic,
etc.
This might be one way he does it:
def convertDatesToMxDateTime(str):
if re.match("\d\d\d\d-\d\d-\d\d", str):
return asMxDateTime(str)
return str
def getExtendedLoader(data):
parser = yaml.Parser(clarksData)
parser.setScalarHook(convertDatesToMxDateTime)
return parser
parser = getExtendedLoader(clarksData)
for doc in parser.load():
# do whatever
Basically, we keep YAML simple, but we allow Clark a simple way to extend PyYaml
to support his own implicit types. This doesn't violate YAML interoperability,
though, because the data is still treated as strings at the YAML layer, and all
other YAML parsers--even those without the scalar conversion hooks--will parse
his files just fine. Strings make a great lowest common denominator data type.
What do you guys think? Let me know about the IRC time. I'm also around
earlier in the day, but we have those nagging time zone issues to worry about.
;)
People:
Steve:
wakes up: 8:30am
goes to bed: 1:00pm
where: DC
Ingy:
wakes up: 11:00am
goes to bed: 4:00am
where: Portland
Clark
wakes up: 5:30am
goes back to bed: 07:30:00.000am
activity: swimming
Cheers,
Steve
View entire thread | https://sourceforge.net/p/yaml/mailman/message/8986472/ | CC-MAIN-2017-39 | refinedweb | 430 | 61.06 |
In this tutorial, you’ll learn the four main approaches to string formatting in Python, as well as their strengths and weaknesses. You’ll also get a simple rule of thumb for how to pick the best general purpose string formatting approach in your own programs.
Remember the Zen of Python and how there should be “one obvious way to do something in Python”? You might scratch your head when you find out that there are four major ways to do string formatting in Python.
Table of Contents
Let’s jump right in, as we’ve got a lot to cover. In order to have a simple toy example for experimentation, let’s assume you’ve got the following variables (or constants, really) to work with:
>>> errno = 50159747054 >>> name = 'Bob'
Based on these variables, you’d like to generate an output string containing a simple error message:
'Hey Bob, there is a 0xbadc0ffee error!'
That error could really spoil a dev’s Monday morning… But we’re here to discuss string formatting. So let’s get to work.#1 “Old Style” String Formatting (% Operator)
Strings in Python have a unique built-in operation that can be accessed with the % operator. This lets you do simple positional formatting very easily. If you’ve ever worked with a printf-style function in C, you’ll recognize how this works instantly. Here’s a simple example:
>>> 'Hello, %s' % name "Hello, Bob"
I’m using the %s format specifier here to tell Python where to substitute the value of name, represented as a string.
There are other format specifiers available that let you control the output format. For example, it’s possible to convert numbers to hexadecimal notation or add whitespace padding to generate nicely formatted tables and reports. (See Python Docs: “printf-style String Formatting”.)
Here, you can use the %x format specifier to convert an int value to a string and to represent it as a hexadecimal number:
>>> '%x' % errno 'badc0ffee'
The “old style” string formatting syntax changes slightly if you want to make multiple substitutions in a single string. Because the % operator takes only one argument, you need to wrap the right-hand side in a tuple, like so:
>>> 'Hey %s, there is a 0x%x error!' % (name, errno) 'Hey Bob, there is a 0xbadc0ffee error!'
It’s also possible to refer to variable substitutions by name in your format string, if you pass a mapping to the % operator:
>>> 'Hey %(name)s, there is a 0x%(errno)x error!' % { ... "name": name, "errno": errno } 'Hey Bob, there is a 0xbadc0ffee error!'
This makes your format strings easier to maintain and easier to modify in the future. You don’t have to worry about making sure the order you’re passing in the values matches up with the order in which the values are referenced in the format string. Of course, the downside is that this technique requires a little more typing.
I’m sure you’ve been wondering why this printf-style formatting is called “old style” string formatting. It was technically superseded by “new style” formatting in Python 3, which we’re going to talk about next.#2 “New Style” String Formatting (str.format)
Python 3 introduced a new way to do string formatting that was also later back-ported to Python 2.7. This “new style” string formatting gets rid of the %-operator special syntax and makes the syntax for string formatting more regular. Formatting is now handled by calling .format() on a string object.
You can use format() to do simple positional formatting, just like you could with “old style” formatting:
>>> 'Hello, {}'.format(name) 'Hello, Bob'
Or, you can refer to your variable substitutions by name and use them in any order you want. This is quite a powerful feature as it allows for re-arranging the order of display without changing the arguments passed to format():
>>> 'Hey {name}, there is a 0x{errno:x} error!'.format( ... name=name, errno=errno) 'Hey Bob, there is a 0xbadc0ffee error!'
This also shows that the syntax to format an int variable as a hexadecimal string has changed. Now you need to pass a format spec by adding a :x suffix. The format string syntax has become more powerful without complicating the simpler use cases. It pays off to read up on this string formatting mini-language in the Python documentation.
In Python 3, this “new style” string formatting is to be preferred over %-style formatting. While “old style” formatting has been de-emphasized, it has not been deprecated. It is still supported in the latest versions of Python. According to this discussion on the Python dev email list and this issue on the Python dev bug tracker, %-formatting is going to stick around for a long time to come.
Still, the official Python 3 documentation doesn’t exactly recommend “old style” formatting or speak too fondly of it:
)
This is why I’d personally try to stick with str.format for new code moving forward. Starting with Python 3.6, there’s yet another way to format your strings. I’ll tell you all about it in the next section.#3 String Interpolation / f-Strings (Python 3.6+)
Python 3.6 added a new string formatting approach called formatted string literals or “f-strings”. This new way of formatting strings lets you use embedded Python expressions inside string constants. Here’s a simple example to give you a feel for the feature:
>>> f'Hello, {name}!' 'Hello, Bob!'
As you can see, this prefixes the string constant with the letter “f“—hence the name “f-strings.” This new formatting syntax is powerful. Because you can embed arbitrary Python expressions, you can even do inline arithmetic with it. Check out this example:
>>> a = 5 >>> b = 10 >>> f'Five plus ten is {a + b} and not {2 * (a + b)}.' 'Five plus ten is 15 and not 30.'
Formatted string literals are a Python parser feature that converts f-strings into a series of string constants and expressions. They then get joined up to build the final string.
Imagine you had the following greet() function that contains an f-string:
>>> def greet(name, question): ... return f"Hello, {name}! How's it {question}?" ... >>> greet('Bob', 'going') "Hello, Bob! How's it going?"
When you disassemble the function and inspect what’s going on behind the scenes, you’ll see that the f-string in the function gets transformed into something similar to the following:
>>> def greet(name, question): ... return "Hello, " + name + "! How's it " + question + "?"
The real implementation is slightly faster than that because it uses the BUILD_STRING opcode as an optimization. But functionally they’re the same:
>>> import dis >>> dis.dis(greet) 2 0 LOAD_CONST 1 ('Hello, ') 2 LOAD_FAST 0 (name) 4 FORMAT_VALUE 0 6 LOAD_CONST 2 ("! How's it ") 8 LOAD_FAST 1 (question) 10 FORMAT_VALUE 0 12 LOAD_CONST 3 ('?') 14 BUILD_STRING 5 16 RETURN_VALUE
String literals also support the existing format string syntax of the str.format() method. That allows you to solve the same formatting problems we’ve discussed in the previous two sections:
>>> f"Hey {name}, there's a {errno:#x} error!" "Hey Bob, there's a 0xbadc0ffee error!"
Python’s new formatted string literals are similar to JavaScript’s Template Literals added in ES2015. I think they’re quite a nice addition to Python, and I’ve already started using them in my day to day (Python 3) work.#4 Template Strings (Standard Library)
Here’s one more tool for string formatting in Python: template strings. It’s a simpler and less powerful mechanism, but in some cases this might be exactly what you’re looking for.
Let’s take a look at a simple greeting example:
>>> from string import Template >>> t = Template('Hey, $name!') >>> t.substitute(name=name) 'Hey, Bob!'
You see here that we need to import the Template class from Python’s built-in string module. Template strings are not a core language feature but they’re supplied by the string module in the standard library.
Another difference is that template strings don’t allow format specifiers. So in order to get the previous error string example to work, you’ll need to manually transform the int error number into a hex-string:
>>>>> Template(templ_string).substitute( ... name=name, error=hex(errno)) 'Hey Bob, there is a 0xbadc0ffee error!'
That worked great.
So when should you use template strings in your Python programs? In my opinion, the best time to use template strings is when you’re handling formatted strings generated by users of your program. Due to their reduced complexity, template strings are a safer choice.
The more complex formatting mini-languages of the other string formatting techniques might introduce security vulnerabilities to your programs. For example, it’s possible for format strings to access arbitrary variables in your program.
That means, if a malicious user can supply a format string, they can potentially leak secret keys and other sensitive information! Here’s a simple proof of concept of how this attack might be used against your code:
>>> # This is our super secret key: >>>>> class Error:
... def init(self):
... pass
>>> # A malicious user can craft a format string that
>>> # can read data from the global namespace:
>>>>> # This allows them to exfiltrate sensitive information,
>>> # like the secret key:
>>> err = Error()
>>> user_input.format(error=err)
'this-is-a-secret'
See how a hypothetical attacker was able to extract our secret string by accessing the globals dictionary from a malicious format string? Scary, huh? Template strings close this attack vector. This makes them a safer choice if you’re handling format strings generated from user input:
>>> user_input = '${error.init.globals[SECRET]}'Which String Formatting Method Should You Use?
>>> Template(user_input).substitute(error=err)
ValueError:
"Invalid placeholder in string: line 1, col 1"
I totally get that having so much choice for how to format your strings in Python can feel very confusing. This is an excellent cue to bust out this handy flowchart infographic I’ve put together for you:
Python String Formatting Rule of ThumbKey Takeaways
Thanks for reading ❤
If you liked this post, share it with all of your programming buddies!
☞ Python Programming Tutorial - Full Course for Beginners
☞ Python 3's f-Strings: Usage Guide
☞ Python 3's f-Strings: A guide to String Formatting Syntax?
148 0209-3SP_block_1 ['g76p010060q00250r.0005' 'JEBD0507160 REV A' CHNCIII 149 0209-3SP_block_2 ['g76x.3761z-.500p03067q03067f.05' 'JEBD0507160 REV A' CHNC III 150 0209-5SP_block_1 ['g76p020060q00250r.0005' 'JEBD0507160 REV A' CHNC III 151 0209-5SP_block_2 ['g76x.3767z-.48p03067q03067f.05' 'JEBD0507160 REV A' CHNC III 152 0210-3SP_block_1 ['g76p010060q00250r.0005' 'JEBD0507160 REV A' CHNC III | https://morioh.com/p/46d750616fac | CC-MAIN-2020-10 | refinedweb | 1,770 | 64.51 |
Would it be ok to advertise it here?
Typical method: On some specific conditions, master device and slave device can make pair with each other automatically. (This is the default method.)
Hi Michael, Your BT Module is better than mine Specifically because you can change its mode between Master and Slave. I may have to get me one of those
The module must be set to Slave mode, I suspect that your other phones can't see it either but otherwise when it is set correctly it should display a fairly rapidly blinking LED (which is blinking evenly) and then it is 'discoverable' so you can go into bluetooth settings and 'scan for devices' to find it.
Firstly congratulations on purchasing both bits! You purchased the bluetooth module plus the backplane where it is really easy to accidentally purchase just the bluetooth module without the backplane or just the backplane without the bluetooth module everyone should read the posts on my forum at BTInterface.com before purchasing one!
By default when its found your module should show up as 'HC-05' (mine is 'Linvor').
Otherwise its the same as mine, you don't need to connect those extra two pins up, Key or State, you just need the power pins and the TX/RX.
Note that even without the TX & RX pins connected to anything and only with the power pins connected if the module is in the correct mode (Slave mode) then the LED will flash and you can scan for it and find it and pair with it from any bluetooth phone (mine flashes at a constant 4 hertz (ish)).So I suspect that your only problem is that it is in Master mode and as such IT is the one who would do the discovering and not the other phones.
I would leave it set to the default 9600 baud, note that if you change the baud all of a sudden you may start receiving rubbish characters in your serial monitor, that's cause you have to now change the serial monitor's baud rate to match
the problem is your app doesn't connect when other devices can.
Well being a fellow modem guy you obviously know about AT commands, I built a 300baud modem for my commodore 64 once If the device is set to slave mode then the only thing I can think of why it might not work with BTInterface is the baudrate.Should be 9600 for BTInterface which is usually the default for these modules.I suspect that the hardware of these modules is the same and this guy called Byron made a reflasher: flash the software where I believe I could flash the HC-05 software on to my module which I think is the HC-06 (slave only).If it is the same hardware then I can assure you that it works fine on the 5v Arduino, should work between 3.3 and 6v.All BTInterface does is start the Android pairing process which you can do manually if you prefer.Once the device is paired it should be just a normal serial bluetooth device although I have noticed that it always says 'paired but not connected' even while it is sending and receiving to it You say Quotethe problem is your app doesn't connect when other devices can.Do you mean 'where other software can' ? Can you connect other apps but not mine?There's another app called ArduinoCommander, can you get that one to work with your module? on the same Android device?Its just that I'm not sure if its something to do with the Android device or if its specific to BTInterface?
•User can connect 3.3 to 5VDC and connect TX and RX to your control IO (general 3.3 to 5V digital input output of MCU or arduino IO is ok, or general TLL IO)
I really, really, really hate that BTInterface makes a sound when connecting and disconnecting. There appears to be no way to shut up the program.
void loop(){if(command == "btinterface"){ config1() 'if I see the string btinterface then I know that we have a connection, either for the first time or after a disconnect so I'll call the function config1() to configure screen1: intMode = 1 'set variable so that I know I'm in 'config1' mode} 'end of if statementif(command == "b1") { 'then I know that b1 has been pressedif(intMode == 1) { 'then I know that b1 has been pressed while I'm in Mode1 so I'll do a mode1 button1 press thingsend("say Ooh. I see you are in mode one and that the button which currently reads as. Action One!. has been pressed.")send("sms 0123456789 this is a text message from your microcontroller just to let you know that I am currently in Mode 1 and the button Action One! has been pressed.")'do other Mode1 Action One! things here.'perhaps Action One! needs the screen1 to be reconfigured into mode 2? if so then:config2()intMode == 2}} 'end of loopvoid config1(){send("screen1") 'this causes BTInterface to change to screen1, also if BTInterface was sitting in the background this will cause screen1 to appear, it is only while screen1 is displayed that you can configure its controls.send("pad hide")send("sb hide")send("b1 Action One!") 'this changes the text on button1 (b1) to read Action One!send("b2 Action Two!")send("b3 Action Three!")send("b4 Action Four!")'there, I have finished configuring screen1 into 'config1' mode. } 'end of config1() function.
#include <SoftwareSerial.h>#include <Servo.h>#include <stdlib.h>Servo myservo;SoftwareSerial softSerial(10, 11); // RX, TXString command = ""; int r1 = 0;int RelayPin = 2;int val;int pr1 = 3;int pr2 = 4;int pb1 = 5;int pb2 = 6;void setup() { Serial.begin(9600); //Remember to set the Arduino Serial Monitor to 9600 Baud and no line ending if you want to get AT commands to work before you connect. softSerial.begin(9600); // SoftwareSerial "com port" data rate. JY-MCU v1.03 defaults to 9600. softSerial.println("Arduino Ready!"); pinMode(2, OUTPUT); pinMode(pr1, INPUT_PULLUP); pinMode(pr2, INPUT_PULLUP); pinMode(pb1, INPUT_PULLUP); pinMode(pb2, INPUT_PULLUP); digitalWrite(RelayPin, LOW); myservo.attach(9); } void screen1set(){ send("screen1"); delay(300); } void loop(){ if (Serial.available()) softSerial.write(Serial.read()); while(softSerial.available() > 0) { // While there is more to be read, keep reading. command += (char)softSerial.read(); delay(10); } if(command != "") Serial.println(command); if(command == "btinterface") screen1set(); // if(digitalRead(pr1) == LOW){ send("say you pressed, red? 1."); delay(3000); send("say stand by. I shall initiate the turning on and off of relay 1."); delay(5000); send("say in. 5. 4. 3. 2. 1. "); delay(6000); command = "b1"; } if(digitalRead(pr2) == LOW){ send("say self destruct sequence aborted. phew!"); delay(500); } if(digitalRead(pb1) == LOW){ send("say no? that was a blackwon. stupid?"); delay(500); } if(digitalRead(pb2) == LOW){ send("say self destruct sequence has been initiated. this system will self destruct in 30 seconds. press any red button to abort self destruct."); delay(500); } if(command.startsWith("sb")){ // Then the slidebar has been moved so alter the position of the servo! val = stringToNumber(command.substring(2)); val = map(val, 100, 0, 0, 179); myservo.write(val); send("l4 " + command.substring(2)); } if(command == "b2") { send("sms 07951123456 This apparatus will self destruct in 10 seconds!"); digitalWrite(RelayPin, HIGH); } if(command == "b1"){ if(r1 == 0){ r1 = 1; send("b1 Relay Off"); digitalWrite(RelayPin, HIGH); delay(500); return; } if(r1 == 1){ r1 = 0; send("b1 Relay On"); digitalWrite(RelayPin, LOW); delay(500); return; } } if(command == "b3") send("say Hello Ian. Remember to get to work on be tee interface?"); command = ""; }// End Loopint stringToNumber(String thisString) { int i, value = 0, length; length = thisString.length(); for(i=0; i<length; i++) { value = (10*value) + thisString.charAt(i)-(int) '0'; } return value;}void send(String s){ softSerial.println(s);}
Like the other app, the text output area is flawed in that it doesn't scroll to the bottom when you get new output
Please enter a valid email to subscribe
We need to confirm your email address.
To complete the subscription, please click the link in the
Thank you for subscribing!
Arduino
via Egeo 16
Torino, 10131
Italy | http://forum.arduino.cc/index.php?topic=145241.msg1101816 | CC-MAIN-2016-44 | refinedweb | 1,381 | 63.39 |
Subprocess module in Robofont
Hello RoboUsers!
I would like to use the subprocess python module in Robofont.
If I launch the following script from sublime/terminal:
import subprocess subprocess.call(['vfb2ufo', 'font.ufo'])
this is the output that I receive:
VFB2UFO Converter Copyright (c) 2013-2015 by Fontlab Ltd. Build 2015-01-23
Otherwise, launching the same file from Robofont, I receive this output:
Traceback (most recent call last): File "test_subprocess.py", line 2, in <module> File "subprocess.pyc", line 524, in call File "subprocess.pyc", line 711, in __init__ File "subprocess.pyc", line 1308, in _execute_child OSError: [Errno 2] No such file or directory
There seems to be a problem related to paths, but using something like:
os.path.join(os.getcwd(), ‘font.ufo’)
doesn’t fix the problem.
Any tips?
Actually I’d like to use the vfb2ufo tool someway, so if there’s an alternative to subprocess module is of course welcome.
Thanks
Solved:
from mojo.compile import executeCommand print executeCommand(['vfb2ufo', 'font.ufo'], shell=True)
Solved:
from mojo.compile import executeCommand print executeCommand(['vfb2ufo', 'font.ufo'], shell=True)
yep, using
executeCommandsolves your issue when
shellis set to
True | https://forum.robofont.com/topic/370/subprocess-module-in-robofont | CC-MAIN-2020-40 | refinedweb | 192 | 52.56 |
Hi,
I am developing an app that monitors and corrects the user input based on some rules.
I am reading the events from keyboard with the keyboard python module.
I faced some problem when the user types very fast, as regards some overlays of text. By this I mean that when my app writes the correct input, the user continues writing and may writes before the corrector types the whole word.
I found, that I can start a keyboard hook with suppressed output to screen and tried to implements a solution.
In the above code I tried recreating the problem and tried giving the general idea.
import keyboard from collections import deque string : str = "" counter : int = 0 is_suppressed: bool = False # this indicates if letters are shown in the window or not suppressed_string: str = "" q = deque() # this is used as a buffer, and stores the words that are entered when the # program is correcting def keyboard_module_write_to_screen(is_suppressed, string): for i in range(len(string) + 1): print("--Pressing backspace--") keyboard.press_and_release('backspace') for i, char in enumerate (string): # simulating a calculation final_char_to_be_written = char.upper() print("---WRITING THE CHAR -> {} ---".format(final_char_to_be_written)) keyboard.write(final_char_to_be_written) for i in range(30): keyboard.write('*') keyboard.write(' ') def monitoring(event): global counter, string, is_suppressed, suppressed_string if (event.event_type == keyboard.KEY_DOWN): # and event.name != 'backspace'): print("-String entered : {}".format(event.name)) if (event.name == 'space'): # if space is button a new word is entered if (is_suppressed is True): # if program occupied writing to the screen save the word to the buffer q.appendleft(suppressed_string) suppressed_string = "" elif (is_suppressed is False): # check and write first from the deque, # write the word(s) that were stored in the buffer before writing current # input string # haven't find a way to do the above alongside the others keyboard.unhook_all() keyboard.hook(monitoring, suppress = True) is_suppressed = True keyboard_module_write_to_screen(is_suppressed, string) keyboard.unhook_all() keyboard.hook(monitoring, suppress = False) is_suppressed = False counter = 0 string = "" elif (event.name in "abcdefghijklmnopqrstuvwxyz") : if (is_suppressed is True): suppressed_string = ''.join([suppressed_string, event.name]) print("########## SUPPRESSED_STRING = {} #########".format(suppressed_string)) counter = counter + 1 print("-- COUNTER is : {}".format(counter)) string = ''.join([string, event.name]) elif (event.name == "]"): print(q) elif (event.name == 'backspace'): pass keyboard.hook(monitoring, suppress = False)
The main thing I want to achieve is
1)while correcting - writing to the window, read events and save them to a buffer
2)when correcting - writing is done check the buffer, write it's content, but keep reading events
3)if buffer empty and currently not writing something, read events etc.
I didn't manage to make it work and produce the desired result.
Any advice on how to make it work, would be useful.
Thanks in advance for any help. | https://www.daniweb.com/programming/software-development/threads/522274/suppress-input-while-writing-to-window-python | CC-MAIN-2022-33 | refinedweb | 447 | 57.06 |
A long time ago I wanted to show similar/related posts at the end of each post on this blog. At the time, Hugo didn't have built in support to show related posts (nowadays it has). So I decided to implement my own using python, sklearn and Clustering.
Program design
Reading & Parsing posts
Since I write in English and Spanish, I needed to train the model twice, in order to only show English related post to English readers and Spanish ones to Spanish readers. To achieve it, I created a
readPosts function that takes in as parameters a path where the post are, and a
boolean value indicating whether I want related posts for English or Spanish.
dfEng = readPosts('blog/content/post', english=True) dfEs = readPosts('blog/content/post', english=False)
Inside this function (you can check it on my github), I read all the English/Spanish posts and return a Pandas Data Frame. The most important thing this function does is select the correct parser, to open files using a yaml parser or a TOML parser. Once the frontmatter is read,
readPosts makes a DataFrame using that metadata. It only takes into account the following metadata:
tags = ('title', 'tags', 'introduction', 'description')
This is the information that will be used for classifying.
Help me keep writing
Model Selection
As I said at the beginning of the post, I decided to use the Clustering technique. As I am treating with text data, I need a way to convert all this data to numeric form, as clustering only works with numeric data. To achieve it, I have to use a technique called TF-IDF. I won't delve into the details of this technique, but give you a short introduction to it.
What is TF-IDF (Term Frequency - Inverse Document Frequency)
When working with text data, many words will appear for multiple documents of multiple classes, this words typically don't contain discriminatory information. TF-IDF aims to downweight those frequently appearing words in the data (In this case, the Pandas Data Frame).
The tf-idf is defined as the product of:
- The term frequency. Number of times a term appears in a document.
- The Inverse document frequency. How much information the word provides taking into account all documents, that is, if the term is common or rare across all documents.
Multiplying the above values gives the tf-idf, quoting Wikipedia:
A high weight in tf–idf is reached by a high term frequency (in the given document) and a low document frequency of the term in the whole collection of documents; the weights hence tend to filter out common terms. Since the ratio inside the idf's log function is always greater than or equal to 1, the value of idf (and tf-idf) is greater than or equal to 0. As a term appears in more documents, the ratio inside the logarithm approaches 1, bringing the idf and tf-idf closer to 0.
In short, as more common a term is across all documents, less tf-idf score it will have, signaling that this word is not important for classifying.
Hyper-Parameter Tunning
To select the appropriate parameters for the model I've used sklearn's GridSearchCV method, you can check it on line 425 of my code.
Cleaning the Data
Now that I have decided what method use (clustering) and how convert the text data to a vector format (TF-IDF), I have to clean the data. Usually, when dealing with text data you have to remove words that are used often, but doesn't add meaning, those words are called stop words (the, that, a etc). This work is done in generateTfIdfVectorizer. In this process I also perform a stemmization of the words. From Wikipedia, Stemming is the process of:
Reducing inflected (or sometimes derived) words to their word stem, base or root form—generally a written word form.
Depending on which language I am generating the related posts for (English or Spanish) I use
def tokenizer_snowball(text): stemmer = SnowballStemmer("spanish") return [stemmer.stem(word) for word in text.split() if word not in stop]
for Spanish or
def tokenizer_porter(text): porter = PorterStemmer() return [porter.stem(word) for word in text.split() if word not in stop]
for English.
After this process, finally I have all the data ready to perform clustering.
Clustering
I've used KMeans to do the clustering. The most time consuming task of all this process was, as usual, clean the data, so this step is simple. I just need a way of know how many clusters I should have. For this, I've used the Elbow Method. This method is an easy way to identify the value of
k (How many clusters there are.) for which the distortion begins to increase rapidly. This is best shown with an image:
After executing the model, using 16 features, this are the ones selected for Spanish:
[u'andro', u'comand', u'curs', u'dat', u'desarroll', u'funcion', u'googl', u'jav', u'libr', u'linux', u'program', u'python', u'recurs', u'script', u'segur', u'wordpress']
and the ones used for English:
[u'blogs', u'chang', u'channels', u'curat', u'error', u'fil', u'gento',u'howt', u'list', u'lists', u'podcasts', u'python', u'scal', u'scienc', u'script', u'youtub']
How I integrated it with Hugo
This was a tedious task, since I had to read the output of the model (in CSV format) into hugo and pick 10 random post from the same cluster. Although is no longer required to use this, I want to share how I integrated this approach with Hugo to show related posts:
{{ $url := string (delimit (slice "static/" "labels." .Lang ".csv" ) "") }} {{ $sep := "," }} {{ $file := string .File.LogicalName }} {{/* First iterate thought csv to get post cluster */}} {{ range $i, $r := getCSV $sep $url }} {{ if in $r (string $file) }} {{ $.Scratch.Set "cluster" (index . 1) }} {{ end }} {{ end }} {{ $cluster := $.Scratch.Get "cluster" }} {{/* loop csv again to store post in the same cluster */}} {{ range $i, $r := getCSV $sep $url }} {{ if in $r (string $cluster) }} {{ $.Scratch.Add "posts" (slice $r) }} {{ end }} {{ end }} {{ $post := $.Scratch.Get "posts" }} {{/* Finally, show 5 randomly related posts */}} {{ if gt (len $post) 1 }} <h1>{{T "related" }}</h1> <ul> {{ range first 5 (shuffle $post) }} <li><a id="related-post" {{ printf "href=%q" ($.Ref (index . 2)) | safeHTMLAttr }} {{ printf "title=%q" (index . 3) | safeHTMLAttr }}>{{ index . 3 }}</a></li> {{ end }} </ul> {{ end }}
If you have any comments, or want to improve something, comment below.
References
Spot a typo?: Help me fix it by contacting me or commenting below! | https://elbauldelprogramador.com/en/related-posts-hugo-sklearn/ | CC-MAIN-2018-43 | refinedweb | 1,096 | 61.26 |
You can mix Felgo and Qt components freely in your mobile app. So if you need, for example, push notifications in a Qt app that uses Qt Quick Controls 2, you can keep your existing code and simply add the Google Cloud Messaging (Firebase) push notification plugin by Felgo like this:
import QtQuick 2.0
import QtQuick.Controls 2.0
import Felgo 3.0

ApplicationWindow {

  GoogleCloudMessaging {
    onNotificationReceived: {
      console.debug("Received push notification: ", JSON.stringify(data))
    }
  }
}
This blog post covers the exact steps to add Felgo to your Qt mobile app. Let’s get started.
How does Felgo Improve Qt?
The Felgo SDK for cross-platform apps and games comes with many components to make mobile development with Qt easier. For an overview of how Felgo improves Qt for mobile app developers and a comparison between Qt and Felgo, you can also see this page.
It is possible to use all Felgo features in your Qt Quick application, no matter if it is based on Qt Quick Controls 1 or Qt Quick Controls 2. It is even possible to use the Felgo components in your Qt Widgets based app – you can contact us here if you’d like to get an example project showing how to do this.
All Felgo Components can be freely mixed with other QML Items in your mobile app. To make the learning curve easier, Felgo comes with many open-source demo and example projects you can use as a starting point for your own projects.
However, it is not only possible to create Felgo projects from scratch – you can even use Felgo Engine in your existing Qt Quick applications to take advantage of the included features in your Qt app. This article will guide you through all the important steps to successfully use Felgo in your Qt application:
- Add Felgo to your existing Qt Installation
- Integrate Felgo in your Qt Quick Project
- Use Felgo Components in your QML Code
- Access Native Device Features in your Qt App
- Integrate Third-party Services like AdMob in your Mobile App
Add Felgo to your existing Qt Installation
If you already installed Qt 5 on your system, we still recommend to install Felgo in parallel using our installer to avoid any Qt version compatibility issues. The Felgo and Qt installation will work side-by-side in two different directories.
If you still would like to add Felgo to your existing Qt installation, first Felgo installation repository with OK and proceed with the Add or remove components option by pressing the Continue button.
5. Felgo Engine will now show up in addition to the available Qt modules. Make sure that it is checked in the Package Manager and proceed with the MaintenanceTool to install Felgo.
Note: Each Felgo version is compatible with a certain Qt version. See the Felgo Update Guide for the currently used Qt version of Felgo.
Alternatively, you can also download the Felgo installer which comes with the supported Qt version automatically. You can install Felgo side-by-side with Qt in a different directory. They will both work independently from each other then.
6. Choose Done when the installation has finished to close the installer.
7. You can now open Qt Creator and log-in to your Felgo Account on the new welcome page, which was added by installing Felgo.
8. If you do not have a Felgo Account yet, you can create a new account on the Felgo Website. Open the downloads page and choose Download to open the sign-up popup, which will guide you through the registration. As you already added Felgo to your Qt installation, you can skip the actual download after you’ve completed the sign-up.
After you’ve successfully installed the Felgo SDK and logged into your Felgo account with Qt Creator, you can start using Felgo in your projects.
Integrate Felgo in your Qt Quick Project
To correctly link and initialize the Felgo components, some additional steps are required for each project that uses Felgo. First, open your Qt Quick Project with Qt Creator. You can then follow these steps to add Felgo:
1. Modify your .pro file configuration to link the Felgo SDK to your project
CONFIG += felgo
2. Open the main.cpp of your project and initialize Felgo
#include <QGuiApplication> #include <QQmlApplicationEngine> #include <FelgoApplication> // 1 - include VPApplication int main(int argc, char *argv[]) { QGuiApplication app(argc, argv); FelgoApplication felgo; // 2 - create FelgoApplication instance QQmlApplicationEngine engine; felgo.initialize(&engine); // 3 - initialize Felgo engine.load(QUrl(QStringLiteral("qrc:/main.qml"))); return app.exec(); }
3. In your main.qml, use Felgo GameWindow or App as your main ApplicationWindow (this is required for license validation and other checks)
import Felgo 3.0 GameWindow { licenseKey: "" // ... }
Alternatively, you can also add a GameWindowItem as a child to your main window instead:
import Felgo 3.0 ApplicationWindow { // ... GameWindowItem { licenseKey: "" } }
4. Add a new file qml/config.json to your project resources, and configure your correct application id and version:
{ "title": "Felgo App", "identifier": "net.vplay.demoapp", "orientation": "auto", "versioncode": 1, "versionname": "1", "stage": "test" }
5. When activating Felgo Felgo in your QML application.
You can download this example from GitHub as a reference project too:
Use Felgo Components in your QML Code
Using Felgo in the QML files of your Qt Quick application is easy. All Felgo types are available as QML components and can be used just like any other Item in QML. Mixing Felgo types with Qt components like Qt Quick Controls 2 is also possible.
For example, if you like to add add an easy-to-use local storage to your application, simply add the Felgo import and use the Storage component of Felgo:
import Felgo 3 Felgo Game Network service to synchronize the stored data across devices of the same user:
import Felgo 3.0 FelgoGameN Felgo Components offer the same ease-of-use, for an overview of all available types and features, please have a look at the online documentation of Felgo.
Access Native Device Features in your Qt App
Another big advantage of Felgo is the possibility to use native device features like the camera or confirmation and input dialogs – the NativeUtils component is all you need.
This example shares a custom text and url with the native share dialog on Android and iOS:
import Felgo 3.0 App { AppButton { text: "Share!" onClicked: nativeUtils.share("Felgo Felgo
Felgo Plugins allow to integrate leading third-party services for ads, analytics, push notifications and more. All native plugins are available to use in your Qt Quick application and offer a convenient way to fast-forward app and game development for iOS and Android.
Since Felgo 2.11.0 it is even possible to fully use Felgo including all monetization plugins with the free Personal Plan of Felgo. Adding services like Google AdMob, Chartboost or Soomla In-App Purchases has never been easier.
a. Native Plugin Integration in Qt
With a single import and QML Item, you can for example show an AdMob Banner in your app:
import Felgo 3 Felgo Felgo Felgo. However, without having a valid Felgo License Key set, only trial-modes are available for all plugins. To be able to set the GameWindowItem::licenseKey with a valid key, let’s first create a new key for the app.
After signing in to your Felgo Felgo in your Qt applications, don’t hesitate to contact us at [email protected] or in the support forums.
The full example of this guide also available on GitHub:
More Posts Like This
Release 2.12.1: Visual Editor & Qt Quick Designer Improvements
How to Make Cross-Platform Mobile Apps with Qt – Felgo Apps
Release 2.11.0: All Monetization Plugins Now Free, Plugin Trial & Firebase Qt Plugin
WeAreDevelopers Conference App – Open Source & Live Now! | https://felgo.com/cross-platform-development/how-to-add-felgo-to-your-qt-mobile-app | CC-MAIN-2019-39 | refinedweb | 1,291 | 53.71 |
Type: Posts; User: junaidsherief
Thanks dglienna for this fast reply
I did'nt think it is so easy
Shell App.Path & "\project1.exe" , vbHide
worked
One more doubt, can we create formless vb application (VB6)
Thanks to all, solved
Solved
I have one VB program abc.exe, I want to call another VB program xyz.exe when a menu item is clicked from abc.exe.
xyz.exe has its own forms (3 forms)
thanks in advance
Thanks, Solved
Iam using Turbo C++ 3.0 - the following code executed without any problems, but
the file not seen created in the hard disk
#include<iostream.h>
#include<conio.h>
#include<fstream.h>
int main()...
I am using Turbo C++ version 3.0, see the following code,
when compiled, causes declaration syntax error
#include<iostream.h>
#include<conio.h>
#include<fstream.h>
using namespace std;
int...
I am using Turbo C version 3.0, see the following code,
when compiled, causes declaration syntax error
#include<iostream.h>
#include<conio.h>
#include<fstream.h>
using namespace std;
int...
I am using Crystal reports 11 for designing reports - Visual Studio 2005 is the application.
Report preview shows correctly shows the records correctly as in the underlying database
but when...
Hi,
I have a VB 6.0 program, which monitors a table for some kind of reports,
if a request pending, it invokes a php resource in the same machine, using webbrowser control.
the php code...
Hi, my collegue leaving the company. he is now using company email server, with 1000's mails in inbox/sent etc etc.
he want to transfer this mail to his laptop to read later using microsoft...
button_click()
{
// write the code for delete from table...
important is that, only when user click yes this even will be fired
}
dear ujja..
first of all you can play a media file from web page simply by requesting
this as a resource , more specifically <a href=xyz.wav>aek do theen..</a>
will start windows media player...
Give full code, including declarative syntax for file control declarion
Hello
I want deploy my asp.net application to the production web server
I want to hide the code behind files in all pages.
how can I use the deployment wizard.
I added another projet of type...
I know using validation controls, but want to try javascript
<%@ Page Language="vb" AutoEventWireup="false" CodeBehind="Default.aspx.vb" Inherits="HtmlEtc._Default" %>
<!DOCTYPE html PUBLIC...
code contains no error
asp.net v2.0
I have one asp.net page, which contains 4 text boxes(asp.net controls),
and one button control.
I wrote validation logic for the text boxes in javascript. I want to execute
this...
using win 2003 POP3 service is far bette than SQL mail.
If your costom controls are performing fine in respects then no need to change this VS 2005 controls, dont mees of it things when it is ok
Windows 2003 server shipped with a POP3 SERVER - search with
google using these keywords.
Once it is configured you can use this as mail server(max 50 users, I think so)
Helo
I am using asp.net 2.0
now a simple qn:
When I build solution: I get my error messages mingled with warnings
(running into 100 of entries).
I want to show errror messages seperately, so...
hi i am using asp.net 3.5
I have a treeview, which shows continents and contries of the world.
contries shown as leafnode(parent node continent) - contry node contains
countryname as text and...
First of all I disagree with your idea of using systems folders for review transaction/adding new inventory item. for this purpose use database, in a networked environments using system folder for...
explain ur intended purpose, how do u expect it to work.
Adam application pooling is possible.
Give detailed working mechanism u expect and objective | http://forums.codeguru.com/search.php?s=d8903892fc602e18599d71af5a89d33c&searchid=7390665 | CC-MAIN-2015-32 | refinedweb | 642 | 68.26 |
Complex time series II, web diagrams.
Continuing with the logistic equation, which, remember, generates the following series:
Xn+1 = µXn(1-Xn)
Which presents periodic dynamics when μ has a value between 3 and approximately 3.5, and chaotic dynamics from this latter value, you can generate the following diagram:
The parabola represents the value of the function Xn from the values between the minimum and maximum value of the series, given a certain value of the parameters. In the case of GraphStudy, the procedure for obtaining this graph is to first generate the time series, with the Series button, and then use the Web button to generate the web graph. In GraphStudy the full parabola is not drawn, but only the part of it where the function takes values with the given parameters and initial values.
In the graph, there is also drawn a diagonal line representing the points where the abscissa and ordinate have the same value.
The horizontal axis corresponds to the values of Xn, whereas the vertical represents the value of Xn+1. Thus, taking any Xn point, you can draw a vertical line up to the corresponding value of Xn+1, located on a point in the parabola.
Then, a horizontal line is drawn from this point to the diagonal line. The direction of this line depends on the side of the parabola in which is the starting point. With this, the point is placed again on the horizontal axis, on the new Xn. Then, a new vertical line is drawn again from here to the corresponding Xn+1 point on the parabola.
After repeating this process numerous times, you obtain a characteristic figure indicating if the series converges to a single value, has one or more periodic cycles, or seems that it tends to cover the entire graph, indicating a chaotic dynamics.
To draw a web diagram, first you must generate the series with the desired parameters and initial value and then click the left mouse button on the web diagram at the point where you want to start to draw the diagram. You can select the number of steps you want to perform, writing it in the text box Steps, and select the color you want to draw the graph.
Giving the μ parameter a value within the area of stationary dynamics of the series, for example 2.5, you obtain the following diagram:
Where it can be seen that the series converges to a single point. If now you give a value in the periodic zone, for example 3.3, you obtain the following diagram:
Where it can be seen that the series just draws a square, continuously bouncing between two fixed values.
If now you give to μ a value in the area of chaotic dynamics, for example, 3.8, you obtain the characteristic diagram of this dynamic, an erratic path that passes virtually for all points on the graph:
The triangular application
We can review another example of series presenting different behavior in terms of its parameters, the triangular application. This is the formula:
Xn+1 = µ(1-2abs(0,5-Xn))
Giving to μ values in the interval (0, 1). If you give μ a value less than 0.5, the system is stationary; the series converges to a single value. However, with values greater than 0.5, always present chaotic dynamics. This is the basic diagram of this application:
With μ = 0.4 and an initial value for X of 0.0001, you obtain a series which converges to 0:
And the corresponding web diagram also indicates this:
However, with μ = 0.9, the things change a lot. This is the series, with an initial value of 0.1:
And this is the web diagram, which clearly indicates the presence of chaos:
With the R program, you can draw these graphs easily, for example with the following code:
fweb<-function(u,x) {
return (u*x*(1-x));
}
webdiagram<-function(p, steps=500) {
xn<-seq(0,1,length.out=1000);
xn1<-sapply(xn,fweb,u=p);
plot(xn,xn1,type='l',col="red");
lines(xn,xn,lty=4);
x0<-runif(1);
xn<-x0;
xn1<-0;
for (i in 1:steps) {
xf<-fweb(p, x0);
xn<-c(xn,x0);
xn1<-c(xn1,xf);
xn<-c(xn,xf);
xn1<-c(xn1,xf);
x0<-xf;
}
lines(xn,xn1);
}
Inside the function fweb you can write the operations that will generate the series, the web diagram is drawn with the webdiagram function. In this case, we are considering functions with a single parameter, but the code can be easily modified to use an arbitrary number of parameters.
Finally, I will recommend you a book that is a classic, where you can find more about this and many other topics: Fractals Everywhere, from Michael F. Barnsley.
The next article in the series will focus on phase diagrams and attractors. | http://software-tecnico-libre.es/en/article-by-topic/all-sections/all-topics/complex-systems-graphic-analysis/complex-time-series-II | CC-MAIN-2020-29 | refinedweb | 815 | 50.57 |
I am trying to learn regular expressions and have a question for the following codes
Why does this code allow everything return (preg_match_all ("/[a-z]\'\-\s*/", $testString));
Why does this allow nothing return (preg_match_all ("/^[a-z]\'\-\s*$/", $testString));
What test strings are you using?
The difference between the two is quite simple, the latter requires the beginning of the string to match a-z (lowercase) and end with at least 1 space.
Then what do I do to make the 1st code only allow whats listed?
Tell me what strings you want captured and what you do not want captured and I can answer that.
That is for first and last name, I want to allow for spaces, hyphens and apostrophes. Maybe you clarify my assumptions about regular expressions. Does it return 0 when a character is inserted that is not in the argument or does it just check to see if the listed characters are there?
There, fixed that for you
I think what you're looking for this
~^[a-zA-Z'\\s-]+$~
Please note that this does not allow for special characters like the é in my name. For names I mostly find it easiest to just strip_tags to prevent script injection and be done with it. | https://www.sitepoint.com/community/t/regular-expressions/24494 | CC-MAIN-2017-09 | refinedweb | 209 | 65.86 |
Introduction: Web Controlled Rover
Building and playing with robots is my main guilty pleasure in life. Others play golf or ski, but I build robots (since I can't play golf or ski :-). I find it relaxing and fun! To make most of my bots, I use chassis kits. Using kits helps me do what I like doing more, the software and electronics and also makes for a better chassis for my all-thumbs self.
In this Instructable, we will look in what it takes to make a simple but robust Wifi/web controlled rover. The chassis used is the Actobotics Gooseneck. I chose it for it's size, expand-ability and cost but you can use any other chassis of your own choosing.
For a project like this, we will need a good solid single board computer and for this bot I chose to use the Raspberry Pi (RPI) a Linux based computer. The RPI (and Linux) gives us lots of coding options and Python will be used for the coding side. For the web interface I use Flask, a lightweight web framework for Python.
To drive the motors, I chose a RoboClaw 2x5a. It allows for simple serial communication for commanding it and works well with the RPI and the motors on the Gooseneck.
Finally, it has a webcam for POV type video feedback for driving it remotely. I will cover each topic in more detail later.
Step 1: Hardware Needed
- Actobotics Gooesneck chassis or a suitable replacement of your choice
- Raspberry Pi of your choice (or clone) - An RPI model B is used on this bot, but any with at least two USB ports will work
- Standard Servo Plate B x1
- 90° Single Angle Channel Bracket x1
- RoboClaw 2x5a motor driver
- S3003 or similar standard size servo
- Small breadboard or Mini breadboard
- Female to Female jumper wires
- Male to Female jumper wires
- Web cam (optional) - I use a Logitech C110, and here is a list of supported cams for the RPI
- 5v-6v power source for servo power
- 7.2v-11.1v battery for drive motor powering
- 5v 2600mah (or higher) USB power bank for the RPI
- USB Wifi adapter
On my bot, I use 4" wheels to make it a little more All-Terrain-Indoor. For this option you will need:
Step 2: Assembling the Chassis
First assemble the chassis following the instructions included with the chassis or video. After finishing you should have something like the image. NOTE: When assembling the Neck part, just leave the mounting bracket off.
On my bot, I chose to replace the wheels that the chassis came with for 4" heavy duty wheels. This is optional and not needed unless you want to do the same.
Step 3: Mounting the Electronics
The Gooseneck has a lot of room and options for mounting your electronics. I give you these pictures as a guide line, but you can choose how you would like to lay it all out. You can use stand-offs, double-sided tape, Velcro or servo-tape to mount the board and batteries.
Step 4: Adding the Webcam
Take the 90 degree bracket, lightweight servo hub and four (4) of the .3125" screws for this step:
- Take the servo hub and place it on one side of the bracket and secure them together with the .2125" screws like pictured
- Next mount the servo into the servo bracket
- Attach the 90 degree bracket with the servo horn to the servos spine and use the horn screw that came with the servo to connect them together
- Now mount the Servo in bracket onto the top of the goose-neck with the remaining screws
- Mount camera with zip-ties or double sided tape on to the 90 degree bracket
Use the pictures for guides if needed.
Step 5: Wiring It All Up
The wiring is fairly strait forward for this robot.
The Motors:
- Solder leads on both motors if you have not done so already
With the robots front (the end with the goose-neck) facing away from you:
- Connect the motor wires on the left motor to the channel M1A and M1B
- Connect the motor wires on the right motor to the channel M2A and M2B
Ground (GND) connections:
- Connect one ground pin on the RoboClaw to the ground jumper board. The ground pin line on the RoboClaw is closest to the center (See pic)
- Connect PIN 6 on the RPI to the jumper board. See the RPI header pic for pin assignments.
- Connect the GND from the servo battery pack to one of the pins on the jumper board.
- Run a jumper wire from the jumper board to the servos GND wire.
RPI to RoboClaw:
- Connect the RPI GPIO14 TXD pin to RoboClaw S1 pin
Power:
- Connect the POS wire from the servo battery to the servos POS lead
- Connect the POS wire from the motor battery to POS (+) of the RoboClaw motor power input terminal. We will leave the GND terminal disconnected for now.
Step 6: Setting Up the RPI
I assume the user here knows some about Linux and the RPI. I do not cover how to setup or connect to one. If you need help with that then use the pages below.
To get your RPI setup, have a look at the following pages:
For general jump-off pages, The RPI main page and the eLinux pages are great places to start.
See this link for RPI general Wifi setup.
If you plan on using some sort of camera or web cam on the bot, have a look at these pages to get the basic needed files.
Streaming video:
There are a few ways to get video streaming working on a RPI, but the method I prefer is using Motion.
To install it on your RPI run this: sudo apt-get install motion
This instrucatable goes over setting it up for streaming as well.
Step 7: Configuring the RPI Serial Port
Step 8: Installing the Python Modules
You will need python installed on the RPI as well as the python package installer pip.
To install pip do:
- sudo apt-get install python-setuptools
- sudo easy_install pip
Then:
- sudo pip install flask
- sudo pip install pyserial
- sudo pip install RPIO
This will be all the modules needed for the code to run.
Step 9: Setting Up the RoboClaw
I have the robot code talking to the RoboClaw in Standard Serial Mode at 19200 baud.
To set the RoboClaw up for this do:
- Hit the "MODE" button on the RoboClaw
- Hit the set button until the LED flashes 5 (five) times between the delays
- Hit the "LIPO" button to store
- Next hit the "SET" button until the LED flashes 3 (three) times between the delays
- Hit the LIPO button to store
That's it for setting up the motor controller. See the pdf linked above for more info if needed.
Step 10: Installing the Rover Program/files
Download and copy the rover.zip file to your RPI in your pi user directory.
If you are running Linux or a Mac, you can use 'scp' to do it:
scp ~/location/of/the/file/rover.zip pi@your_rpi_ip:/~
For Windows, you can download and use pscp and then do:
pscp /location/of/the/file/rover.zip pi@your_rpi_ip:/~
Once the zipfile is copied over to the RPI, log into it as the pi user.
Now run:
unzip rover.zip
This will unzip the files to a folder named 'rover' and have the following under that folder:
- restrover.py (The python code for the robot)
- static (holds the image files for the buttons on the control page)
- templates (holds the index.htlm file, the control web page)
If you are using a web cam, modify line near the bottom of the index.html file in the template folder. Change the URL in the IFRAME line to match the src URL for your video stream.
Step 11: Starting the Bot Up
Connect the USB power to the RPI.
To start the bot code up, log in as the pi user and run:
- cd rover
- sudo python restrover.py
If all was OK, you should see a screen similar to the image in this step
If you see any errors or issues, you will have to fix them before going forward.
Now, connect the the GND (-) wire to the NEG (-) terminal on the RoboClaw motor power input.
Step 12: Accessing the Bot Control Page
After the robot's python script is running, power up the RoboClaw and then navigate to your RPI's ip like:
You should see the Web control page pop up like in the images. If not, check your RPI output terminal and look for any errors and correct them.
Once on the page, you are ready to control the bot.
The robot will start in the "Med run" setting and at the Medium speed.
The bot can be controlled via the buttons on the page or by keys on the keyboard.
The keys are:
- w - forward
- z - reverse/backward
- a - long left turn
- s - long right turn
- q - short left turn
- e - short right turn
- 1 - pan camera left
- 2 - pan camera right
- 3 - pan full left
- 4 - pan full right
- / - home/center camera
- h - halt/stop robot
There is an half second delay buffer between commands sent. I did this to eliminate unwanted repeated commands. You can of course remove this from the code if you like (in index.html)
The rest of the controls and controlling it should be self explanatory.
Step 13: The Python/Flask Code
This bot uses Python and the Flask web framework. You can learn more about Flask here if you are interested.
The big difference from a Flask app and normal Python script is @app.route class/method used to do the URI handling. Other than that it's pretty much normal Python for the most part.
#!/usr/bin/env python # # Wifi/Web driven Rover # # Written by Scott Beasley - 2015 # # Uses RPIO, pyserial and Flask # import time import serial from RPIO import PWM from flask import Flask, render_template, request app = Flask (__name__, static_url_path = '') # Connect to the comm port to talk to the Roboclaw motor controller try: # Change the baud rate here if different than 19200 roboclaw = serial.Serial ('/dev/ttyAMA0', 19200) except IOError: print ("Comm port not found") sys.exit (0) # Speed and drive control variables last_direction = -1 speed_offset = 84 turn_tm_offset = 0.166 run_time = 0.750 # Servo neutral position (home) servo_pos = 1250 servo = PWM.Servo ( ) servo.set_servo (18, servo_pos) # A little dwell for settling down time time.sleep (3) # # URI handlers - all the bot page actions are done here # # Send out the bots control page (home page) @app.route ("/") def index ( ): return render_template ('index.html', name = None) @app.route ("/forward") def forward ( ): global last_direction, run_time print "Forward" go_forward ( ) last_direction = 0 # sleep 100ms + run_time time.sleep (0.100 + run_time) # If not continuous, then halt after delay if run_time > 0: last_direction = -1 halt ( ) return "ok" @app.route ("/backward") def backward ( ): global last_direction, run_time print "Backward" go_backward ( ) last_direction = 1 # sleep 100ms + run_time time.sleep (0.100 + run_time) # If not continuous, then halt after delay if run_time > 0: last_direction = -1 halt ( ) return "ok" @app.route ("/left") def left ( ): global last_direction, turn_tm_offset print "Left" go_left ( ) last_direction = -1 # sleep @1/2 second time.sleep (0.500 - turn_tm_offset) # stop halt ( ) time.sleep (0.100) return "ok" @app.route ("/right") def right ( ): global last_direction, turn_tm_offset print "Right" go_right ( ) # sleep @1/2 second time.sleep (0.500 - turn_tm_offset) last_direction = -1 # stop halt ( ) time.sleep (0.100) return "ok" @app.route ("/ltforward") def ltforward ( ): global last_direction, turn_tm_offset print "Left forward turn" go_left ( ) # sleep @1/8 second time.sleep (0.250 - (turn_tm_offset / 2)) last_direction = -1 # stop halt ( ) time.sleep (0.100) return "ok" @app.route ("/rtforward") def rtforward ( ): global last_direction, turn_tm_offset print "Right forward turn" go_right ( ) # sleep @1/8 second time.sleep (0.250 - (turn_tm_offset / 2)) last_direction = -1 # stop halt ( ) time.sleep (0.100) return "ok" @app.route ("/stop") def stop ( ): global last_direction print "Stop" halt ( ) last_direction = -1 # sleep 100ms time.sleep (0.100) return "ok" @app.route ("/panlt") def panlf ( ): global servo_pos print "Panlt" servo_pos -= 100 if servo_pos < 500: servo_pos = 500 servo.set_servo (18, servo_pos) # sleep 150ms time.sleep (0.150) return "ok" @app.route ("/panrt") def panrt ( ): global servo_pos print "Panrt" servo_pos += 100 if servo_pos > 2500: servo_pos = 2500 servo.set_servo (18, servo_pos) # sleep 150ms time.sleep (0.150) return "ok" @app.route ("/home") def home ( ): global servo_pos print "Home" servo_pos = 1250 servo.set_servo (18, servo_pos) # sleep 150ms time.sleep (0.150) return "ok" @app.route ("/panfull_lt") def panfull_lt ( ): global servo_pos print "Pan full left" servo_pos = 500 servo.set_servo (18, servo_pos) # sleep 150ms time.sleep (0.150) return "ok" @app.route ("/panfull_rt") def panfull_rt ( ): global servo_pos print "Pan full right" servo_pos = 2500 servo.set_servo (18, servo_pos) # sleep 150ms time.sleep (0.150) return "ok" @app.route ("/speed_low") 
def speed_low ( ): global speed_offset, last_direction, turn_tm_offset speed_offset = 42 turn_tm_offset = 0.001 # Update current direction to get new speed if last_direction == 0: go_forward ( ) if last_direction == 1: go_backward ( ) # sleep 150ms time.sleep (0.150) return "ok" @app.route ("/speed_mid") def speed_mid ( ): global speed_offset, last_direction, turn_tm_offset speed_offset = 84 turn_tm_offset = 0.166 # Update current direction to get new speed if last_direction == 0: go_forward ( ) if last_direction == 1: go_backward ( ) # sleep 150ms time.sleep (0.150) return "ok" @app.route ("/speed_hi") def speed_hi ( ): global speed_offset, last_direction, turn_tm_offset speed_offset = 126 turn_tm_offset = 0.332 # Update current direction to get new speed if last_direction == 0: go_forward ( ) if last_direction == 1: go_backward ( ) # sleep 150ms time.sleep (0.150) return "ok" @app.route ("/continuous") def continuous ( ): global run_time print "Continuous run" run_time = 0 # sleep 100ms time.sleep (0.100) return "ok" @app.route ("/mid_run") def mid_run ( ): global run_time print "Mid run" run_time = 0.750 halt ( ) # sleep 100ms time.sleep (0.100) return "ok" @app.route ("/short_time") def short_time ( ): global run_time print "Short run" run_time = 0.300 halt ( ) # sleep 100ms time.sleep (0.100) return "ok" # # Motor drive functions # def go_forward ( ): global speed_offset if speed_offset != 42: roboclaw.write (chr (1 + speed_offset)) roboclaw.write (chr (128 + speed_offset)) else: roboclaw.write (chr (127 - speed_offset)) roboclaw.write (chr (255 - speed_offset)) def go_backward ( ): global speed_offset if speed_offset != 42: roboclaw.write (chr (127 - speed_offset)) roboclaw.write (chr (255 - speed_offset)) else: roboclaw.write (chr (1 + speed_offset)) roboclaw.write (chr (128 + speed_offset)) def go_left ( ): global speed_offset if speed_offset != 42: roboclaw.write (chr (127 - speed_offset)) roboclaw.write (chr (128 + speed_offset)) else: roboclaw.write (chr (1 + speed_offset)) roboclaw.write (chr (255 - speed_offset)) def go_right ( ): global speed_offset if speed_offset != 42: roboclaw.write (chr (1 + speed_offset)) roboclaw.write (chr (255 - speed_offset)) else: roboclaw.write (chr (127 - speed_offset)) roboclaw.write (chr (128 + speed_offset)) def halt ( ): roboclaw.write (chr (0)) if __name__ == "__main__" : app.run (host = '0.0.0.0', port = 80, debug = True)
If you do not want or need debug information from Flask, set debug to 'false' on the app.run line.
if __name__ == "__main__" :
app.run (host = '0.0.0.0', port = 80, debug = False)
You can also change the port that the Flask http server listens on here as well.
Step 14: Using Other Hardware
If you want to use other hardware, like another type of SBC (Single Board Computer) you should have little issues getting Python and Flask running on other boards like the Beagle Bone, PCDuino etc... You will have to change the code to match the GPIO layout and use the servo driving capabilities of the new board.
To use another type motor driver, you just need to modify the go_forward, go_backward, go_left, go_right and halt functions to do what ever the replacement motor driver needs to make the motor do that particular function.
19 Discussions
i did the tutorial, swapped SD card to test both in my zero, pi-B and my pi3 and im getting the same error on em all, i did run the Fix described in comments, but nothing helped, any suggestions??:
pi@sPIrit:~/rover $ sudo ./restrover.py
Using hardware: PWM
PW increments: 10us
Initializing channel 0...
Traceback (most recent call last):
File "./restrover.py", line 35, in <module>
servo.set_servo (18, servo_pos)
File "build/bdist.linux-armv6l/egg/RPIO/PWM/__init__.py", line 212, in set_servo
File "build/bdist.linux-armv6l/egg/RPIO/PWM/__init__.py", line 97, in init_channel
RuntimeError: rpio-pwm: Page 0 not present (pfn 0xa10000000002608c)
Is there a way to make the script start when the RasPi boots up?
You need to add it to init.d.
Set up a script added to init.d like the in the following link: or
Hi guys, great project! I've been having fun trying to put it together. I have most of it working now with the exception of the web camera interface. Is there any thing special I need to do to get it going with the usb camera rather than the raspi camera module?
Thanks!
Sorry, the link was broken that I had. I have it fixed now. Follow it here:...
This shows how to get a WC running with Motion on a RPI. There are others on the web as well. Just change the link in the rovers index.html to point to your RPI streaming setup.
How do I change the webserver port off the 80 it is currently set at? I have another RPi that is controlling my Garage Door and it is using Apache I was able to use this info to change the port . this does not work on this project.
The port is set here:
if __name__ == "__main__" :
app.run (host = '0.0.0.0', port = 80, debug = True)
I was able to change the port using WinSCP, Clicking the restrover.py and looking for the information you posted above.
This did change and worked on internal IP . However after configuring port forwarding through my router, i still can not access with my DDNS. Any suggestions?
Sorry. I can be no help with you router :)
Great Project however I am having an issue. Im getting this when installing step "sudo pip install RPIO" This is error i get:
copying source/RPIO/__init__.py -> build/lib.linux-armv7l-2.7/RPIO
copying source/RPIO/Exceptions.py -> build/lib.linux-armv7l-2.7/RPIO
creating build/lib.linux-armv7l-2.7/RPIO/PWM
copying source/RPIO/PWM/__init__.py -> build/lib.linux-armv7l-2.7/RPIO/PWM
running build_ext
building 'RPIO._GPIO' extension
creating build/temp.linux-armv7l-2.7
creating build/temp.linux-armv7l-2.7/source
creating build/temp.linux-armv7l-2.7/source/c_gpio
gcc -pthread -fno-strict-aliasing -DNDEBUG -g -fwrapv -O2 -Wall -Wstrict-prototypes -fPIC -I/usr/include/python2.7 -c source/c_gpio/py_gpio.c -o build/temp.linux-armv7l-2.7/source/c_gpio/py_gpio.o
source/c_gpio/py_gpio.c:28:20: fatal error: Python.h: No such file or directory
compilation terminated.
error: command 'gcc' failed with exit status 1
----------------------------------------
Command "/usr/bin/python -c "import setuptools, tokenize;__file__='/tmp/pip-build-v1_9ya/RPIO/setup.py';exec(compile(getattr(tokenize, 'open', open)(__file__).read().replace('\r\n', '\n'), __file__, 'exec'))" install --record /tmp/pip-U99v_f-record/install-record.txt --single-version-externally-managed --compile" failed with error code 1 in /tmp/pip-build-v1_9ya/RPIO
pi@raspberrypi ~ $
The result is causing this error when running script: " sudo python restrover.py" i think. Here is the error message :
pi@raspberrypi ~/rover $ sudo python restrover.py
Traceback (most recent call last):
File "restrover.py", line 13, in <module>
from RPIO import PWM
ImportError: No module named RPIO
pi@raspberrypi ~/rover $
Any idea what i did wrong?
Im using the newest Rpi.
Hi,
That would cause the issue.
Try this one to install the Python gpio:
sudo apt-get install python-dev python-rpi.gpio
Wait, I see now I think.
Do this sudo apt-get install python-dev
and then this: sudo pip install RPIO
See: for more information.
Thanks for the reply. I really want to get this working. Performing the Sudo apt-get install python-dev did work. However now Im getting this error.
pi@raspberrypi ~/rover $ sudo python restrover.py
Traceback (most recent call last):
File "restrover.py", line 13, in <module>
from RPIO import PWM
File "/usr/local/lib/python2.7/dist-packages/RPIO/__init__.py", line 115, in <module>
import RPIO._GPIO as _GPIO
SystemError: This module can only be run on a Raspberry Pi!
pi@raspberrypi ~/rover $
Have a look at this thread:
Looks like a RPI 2 issue. I only have older Pi's. The page does talk of how to fix it.
THAT WORKED!!! Thank you for your help!
Good work Jscottb
Thanks!
awesome ,great job ?
Thank you very much! | https://www.instructables.com/id/Web-controlled-rover/ | CC-MAIN-2018-39 | refinedweb | 3,429 | 75 |
A class defined within another class is known as Nested class. The scope of the nested class is bounded by the scope of its enclosing class.
Syntax:
class Outer{ //class Outer members class Inner{ //class Inner members } } //closing of class Outer
If you want to create a class which is to be used only by the enclosing class, then it is not necessary to create a separate file for that. Instead, you can add it as "Inner Class"
If the nested class i.e the class defined within another class, has static modifier applied in it, then it is called as static nested class. Since it is, static nested classes can access only static members of its outer class i.e it cannot refer to non-static members of its enclosing class directly. Because of this restriction, static nested class is rarely used.
Non-static Nested class is the most important type of nested class. It is also known as Inner class. It has access to all variables and methods of Outer class including its private data members and methods and may refer to them directly. But the reverse is not true, that is, Outer class cannot directly access members of Inner class. Inner class can be declared private, public, protected, or with default access whereas an Outer class can have only public or default access.
One more important thing to notice about an Inner class is that it can be created only within the scope of Outer class. Java compiler generates an error if any code outside Outer class attempts to instantiate Inner class directly.
A non-static Nested class that is created outside a method is called Member inner class.
A non-static Nested class that is created inside a method is called local inner class. If you want to invoke the methods of local inner class, you must instantiate this class inside the method. We cannot use private, public or protected access modifiers with local inner class. Only abstract and final modifiers are allowed.
class Outer { public void display() { Inner in=new Inner(); in.show(); } class Inner { public void show() { System.out.println("Inside inner"); } } } class Test { public static void main(String[] args) { Outer ot = new Outer(); ot.display(); } }
Inside inner
class Outer { int count; public void display() { for(int i=0;i<5;i++) { //Inner class defined inside for loop class inner { public void show() { System.out.println("Inside inner "+(count++)); } } Inner in=new Inner(); in.show(); } } } class Test { public static void main(String[] args) { Outer ot = new Outer(); ot.display(); } }
Inside inner 0 Inside inner 1 Inside inner 2 Inside inner 3 Inside inner 4
class Outer { int count; public void display() { Inner in = new Inner(); in.show(); } class Inner { public void show() { System.out.println("Inside inner "+(++count)); } } } class Test { public static void main(String[] args) { Outer ot = new Outer(); Outer.Inner in = ot.new Inner(); in.show(); } }
Inside inner 1
A class without any name is called Annonymous class.
interface Animal { void type(); } public class ATest { public static void main(String args[]) { //Annonymous class created Animal an = new Animal() { public void type() { System.out.println("Annonymous animal"); } }; an.type(); } }
Annonymous animal
Here a class is created which implements Animal interace and its name will be decided by the compiler. This annonymous class will provide implementation of type() method. | https://www.studytonight.com/java/nested-classes.php | CC-MAIN-2020-05 | refinedweb | 555 | 56.86 |
You can't do much with object-oriented programming without mastering access modifiers but it is still worth gathering the basic ideas here.
There are four access modifiers you can use to control which classes have access to properties and methods of another class:
It is important to realise that private and protected are not code security measures. A private or protected method can be decompiled just as easily as a public method and reflection (when the code is running with full trust) can be used to call a private or protected method from any code.
This confusion between security and design is a general problem in considering any mechanism of inheritance control. It is tempting to always deny access to any code that you some how feel is proprietary and belongs to you even if it actually doesn't succeed in denying access and only makes it more difficult to get further value from your work.
This is about all there is to know about basic access modifiers together with the simple fact that no derived class can have a freer access modifier than its base class. If this wasn't the case what was supposed to be private could become public via a derived class.
As far as encapsulation is concerned the use of the modifiers is simple enough. Any property or method that is part of the class's interface with the outside world should be public. Public types can be accessed by the outside world and by classes derived from the class that they are declared in. Any internal mechanism that is strictly to be hidden from the outside world should be declared as either protected or private.
The big problem is when should you use protected and when private?
The answer is often based on the simple question – do you want a programmer working on a class derived from your class to have access to its inner workings? This is a difficult question. Why would you refuse the use of a method say to a class's "end user" but not to a programmer who is creating a new class from your class? You might feel that a programmer who is capable of creating a new class is probably more technically capable of treating your method correctly – but this is an unlikely assumption.
In most cases the reason why you make a resource protected is because you, the implementer of the class, wants to derive new classes from it. After all, if you can't trust yourself to use resources you have created who can you trust? However, if you do this, notice that you also allow other programmers to do the same. It is also probably unreasonable to extend this privilege to yourself on the grounds that you can be trusted. In a few months' time the code in question will look as alien to you as it does to another programmer.
The decision about which code should be protected and which private should really relate to the code and not the coder. Public methods should define the external world's view of the class - they should define the way the class is used. In the same way private and protected methods should define the inner workings of the class. They are the resources that the class uses to get the job done.
To decide which should be private and which protected the important question is which of the methods are general tools that a derived class might also need to use as part of its extended implementation. In most cases any resource that a base class uses is probably needed by a class that extends it and so protected is most often appropriate – even if private is the default.
A simple example should make the situation clearer especially with reference to inheritance. Consider the class:
public class MyClass1{ public void MyMethod1() { Console.WriteLine("Method1"); }}
and a derived class:
public class MyClass2:MyClass1{ public new void MyMethod1() { base.MyMethod1(); Console.WriteLine("Method2"); }}
The overriding method in the derived class can both override and access the base class's method.
If you change MyClass1's MyMethod1 to protected everything still works as before – the overriding method can even remain public, i.e. code that can't call MyClass1's method can call MyClass2's.
If you change MyClass1's MyMethod1 to private, the call in MyClass2's method to the base class MyMethod1 no longer works. The derived class has no access to the parent's private resources. However, the derived class can create a method with the same name and there is no need for the "new" modifier as nothing is being overridden, i.e. the class doesn't inherit a MyMethod1 from the base class.
If you change MyClass1's MyMethod1 to virtual then it cannot be private as the whole point of a virtual method is to allow derived classes to override it dynamically. However you can make the method virtual protected and the derived class can either use the override or new modifier on its method.
If it uses the override modifier then the overriding method cannot be public – only protected as private is ruled out because the method is once again virtual. The reason that the protection level cannot change is that the compiler cannot work out if a reference to MyClass1 e.g.
MyClass1 myObj;
is going to be referencing at run time an object of type MyClass1 or of type MyClass2.
Now consider the method call:
myObj.MyMethod1();
If MyMethod1 is virtual then which method will be called depends on the runtime type of myObj and if the protection levels were different the compiler could not work out at compile time if the method was or was not accessible.
Notice that if the method isn't declared as virtual then this problem doesn't arise because early binding means that no matter what class myObj references the method defined in MyClass1 is called – because that's the declared type of myObj.
For this reason if you declare the derived class's method as new then it can be public, private or protected. This is once again reasonable as the new modifier "kills" the virtual inheritance, i.e. it works as if MyClass1's method wasn't declared as virtual.
<ASIN:0735624305>
<ASIN:0672329905> | http://i-programmer.info/ebooks/deep-c/559-chapter-five.html?start=1 | CC-MAIN-2017-26 | refinedweb | 1,055 | 58.42 |
Module Overview
GroovyWS is taking over GroovySOAP as CXF replaces XFire. The major difference here is that GroovyWS is using Java5 so if you need to stick to 1.4 please continue to use GroovySOAP.
Most of the documentation is adapted from the former GroovySOAP documentation and will improve in the future. I tried to make it as accurate as possible but feel free to report any error.
GroovyWS is Java5 dependent (due to CXF) and has been tested using groovy-1.5.
In order to use GroovyWS, you must ensure that GroovySOAP is not in your classpath (~/.groovy/lib)
Download
Distributions
GroovyWS is available in two packages:
Installing
You just need to place the above mentioned JAR file in your ${user.home}/.groovy/lib directory.
Documentation
Getting Started
The Server
You can develop your web service using a groovy script and/or a groovy class. The following two groovy files are valid for building a web-service.
- MathService.groovy
public class MathService { double add(double arg0, double arg1) { return (arg0 + arg1) } double square(double arg0) { return (arg0 * arg0) } }
- Then the easy part ... no need for comments
import groovyx.net.ws.WSServer def server = new WSServer() server.setNode("MathService", "")
That's all !
The Client
- Oh ... you want to test it ... two more lines.
import groovyx.net.ws.WSClient def proxy = new WSClient("", this.class.classLoader) def result = proxy.add(1.0 as double, 2.0 as double) assert (result == 3.0) result = proxy.square(3.0 as double) assert (result == 9.0)
- You're done!
Complex types
The Server
Let say we have a server that manage a library in which you can add a book, find a book and get all the books. The server code will probably look like this
- BookService.groovy
class BookService { private List allBooks = new ArrayList() Book findBook(String isbn) { for (book in allBooks) { if (book.isbn == isbn) return book } return null } void addBook(Book book) { allBooks.add(book) } Book[] getBooks() { return (Book[])allBooks.toArray(new Book[allBooks.size()]) } }
- with the class Book being something like that.
Book.groovy
class Book { String author String title String isbn }
To ignore the metaClass property a custom type mapping must be defined (for details refer to Aegis Binding).
Book.aegis.xml
<?xml version="1.0" encoding="UTF-8"?> <mappings xmlns: <mapping name="sample:Book"> <property name="metaClass" ignore="true"/> </mapping> </mappings>
However, if you compile custom data types from Java the bytecode won't contain a metaClass property:
import groovyx.net.ws.WSClient def proxy = new WSClient("", this.class.classLoader) def books = proxy.getBooks() for (book in books) println book def book = proxy.create("defaultnamespace.Book") book.title = "Groovy in Action" book.author = "Dierk" book.isbn = "123" proxy.addBook(book) def bks = proxy.getBooks() println bks.books[0].isbn
Iterating over the books is slightly more complicated since SOAP wrap the arrays in an element (in our case ArrayOfBook). Therefore you have to extract a field from that element. In our case:
def aob = proxy.getBooks() for (book in aob.books) println book.name ;-
Currency rate calculator
There exist a lot of web-services available for testing. One which is pretty easy to evaluate is the currency rate calculator from webservicex.net.
Here is a small swing sample that demonstrate the use of the service. Enjoy !
import groovy.swing.SwingBuilder import groovyx.net.ws.WSClient proxy = new WSClient("", this.class.classLoader) def currency = ['USD', 'EUR', 'CAD', 'GBP', 'AUD', 'SGD'] def rate = 0.0 swing = new SwingBuilder() refresh = swing.action( name:'Refresh', closure:this.&refreshText, mnemonic:'R' ) }
TerraServer-USA by Microsoft
TerraServer supports a Tiling Web Service that enables you to build applications that integrate with USGS imagery found on their site. Here is a sample of what you can achieve.
import groovyx.net.ws.WSClient; def proxy = new WSClient("", this.class.classLoader) // Create the Place object def place = proxy.create("com.terraserver_usa.terraserver.Place") // Initialize the Place object place.city = "mountain view" place.state = "ca" place.country = "us" // Geocode the place def result = proxy.ConvertPlaceToLonLatPt(place) println "Longitude: ${result.lon}" println "Latitude: ${result.lat}"
will give:
Longitude: -122.08000183105469 Latitude: 37.400001525878906
Developers
Guillaume Alleon
Source Control
Community
Articles
A nice article from Geertjan's blog with several examples: | http://groovy.codehaus.org/GroovyWS | crawl-001 | refinedweb | 701 | 62.44 |
Header image: Ressence Type 5 Tilt by Romain Guy.
This blog series is focused on stability and performance monitoring of Android apps in production. Last week, I wrote about using process importance to determine why an app was started.
To track cold start time, we need to know when the app started. There are many ways to do that and this blog evaluates different approaches.
As a reminder, I already established in Android Vitals - What time is it? that I would use
SystemClock.uptimeMillis() to measure time intervals.
Application.onCreate()
The simplest approach is to capture the time at which Application.onCreate() is called.
class MyApp : Application() { var applicationOnCreateMs: Long = 0 override fun onCreate() { super.onCreate() applicationOnCreateMs = SystemClock.uptimeMillis() } }
ContentProvider.onCreate()
In How does Firebase initialize on Android? we learn that a safe early initialization hook for library developers is ContentProvider.onCreate():
class StartTimeProvider : ContentProvider() { var providerOnCreateMs: Long = 0 override fun onCreate(): Boolean { providerOnCreateMs = SystemClock.uptimeMillis() return false } }
ContentProvider.onCreate() also works for app developers and it's called earlier in the app lifecycle than Application.onCreate().
Class load time
Before any class can be used, it has to be loaded. We can rely on static initializers to store the time at which specific classes are loaded.
We could track the time at which the Application class is loaded:
class MyApp : Application() { companion object { val applicationClassLoadMs = SystemClock.uptimeMillis() } }
In Android Vitals - Diving into cold start waters 🥶, we learnt that on Android P+ the first class loaded is the AppComponentFactory:
@RequiresApi(Build.VERSION_CODES.P) class StartTimeFactory : androidx.core.app.AppComponentFactory() { companion object { val factoryClassLoadMs = SystemClock.uptimeMillis() } }
<?xml version="1.0" encoding="utf-8"?> <manifest xmlns: <!-- Replace AndroidX appComponentFactory. --> <application android: </manifest>
To track start time before Android P, Library developers can rely on the class load time of providers:
class StartTimeProvider : ContentProvider() { companion object { val providerClassLoadMs = SystemClock.uptimeMillis() } }
The class loading order will usually be
AppComponentFactory >
Application >
ContentProvider. That may change if your
AppComponentFactory loads other classes on class load.
Process fork time
We know that every app process starts by being forked from Zygote. On Linux & Android, there's a file called
/proc/[pid]/stat that is readable and contains stats for each process, including the process start time.
Let's check out
man proc, under the
/proc/[pid]/stat section:
(2) comm %s The filename of the executable, in parentheses. This is visible whether or not the executable is swapped out. ... (22) starttime %llu The time the process started after system boot. In kernels before Linux 2.6, this value was expressed in jiffies. Since Linux 2.6, the value is expressed in clock ticks (divide by sysconf(_SC_CLK_TCK)).
/proc/[pid]/stat is a file with one line of text, where each stat is separated by a space. However, the second entry is the filename of the executable, which may contain spaces, so we'll have to jump past it by looking for the first
) character. Once we've done that, we can split the remaining string by spaces and pick the 20th entry at index 19.
object Processes { fun readProcessForkRealtimeMillis(): Long { val myPid = android.os.Process.myPid() val ticksAtProcessStart = readProcessStartTicks(myPid) // Min API 21, use reflection before API 21. // See val ticksPerSecond = Os.sysconf(OsConstants._SC_CLK_TCK) return ticksAtProcessStart * 1000 / ticksPerSecond } // Benchmarked (with Jetpack Benchmark) on Pixel 3 running // Android 10. Median time: 0.13ms fun readProcessStartTicks(pid: Int): Long { val path = "/proc/$pid/stat" val stat = BufferedReader(FileReader(path)).use { reader -> reader.readLine() } val fields = stat.substringAfter(") ") .split(' ') return fields[19].toLong() } }
This gives us the realtime for when the process was forked. We can convert that to uptime:
val forkRealtime = Processes.readProcessForkRealtimeMillis() val nowRealtime = SystemClock.elapsedRealtime() val nowUptime = SystemClock.uptimeMillis() val elapsedRealtime = nowRealtime - forkRealtime val forkUptimeMs = nowUptime - elapsedRealtimeMs
This gives us the real process start time. However, in Android Vitals - Diving into cold start waters 🥶, we concluded that app cold start monitoring should start when ActivityThread.handleBindApplication() is called, because app developers have little influence on the time spent before that.
There's another downside: the process can be forked long before the application starts specializing in
ActivityThread.handleBindApplication(). I measured the time from fork to
Application.onCreate() in a production app: while the median time was 350ms, the max was 4 days, and the interval was greater than 1 minute for 0.5% of app starts.
Bind application time
One of the first things ActivityThread.handleBindApplication() does is call Process.setStartTimes():
public class ActivityThread { private void handleBindApplication(AppBindData data) { // Note when this process has started. Process.setStartTimes( SystemClock.elapsedRealtime(), SystemClock.uptimeMillis() ); ... } }
The corresponding timestamp is available via Process.getStartUptimeMillis():
Return the SystemClock#uptimeMillis() at which this process was started.
Sounds great, right? Well, it was great, until API 28. I measured the time from bind application to
Application.onCreate() in a production app. While the median time was 250ms, on API 28+ the max was 14 hours, and the interval was greater than 1 minute for 0.05% of app starts.
I also found similar issues with the time from class loading of
AppComponentFactory to
Application.onCreate(): greater than 1 minute for 0.1% of app start on API 28+.
This can't be due to the device sleeping, since we only measure time intervals using
SystemClock.uptimeMillis(). I haven't been able to figure out exactly what's going on here, it looks like sometimes bind application starts then halts mid way and the actual app start is much later.
Conclusion
Here's how we can most accurately measure the app start time when monitoring cold start:
- Up to API 24: Use the class load time of a content provider.
- API 24 - API 28: Use Process.getStartUptimeMillis().
- API 28 and beyond: Use Process.getStartUptimeMillis() but filter out weird values (e.g. more than 1 min to get to
Application.onCreate()) and fallback to the time
ContentProvider.onCreate()is called.
Discussion (5)
Did you do any analysis using a timestamp from the object initialization of
AppComponentFactoryrather than the static initialization? I'm wondering if there might be something that causes class loading to happen much before the actual application instantiation.
Thanks for the series! It's been extremely helpful. Question on the conclusion again. For APIs before 24, if we don't need Context, could we use an earlier time than the class load time of a content provider, e.g. class load of Application?
Content providers are loaded before the Application class. If you leverage the application class then you're likely missing some of the init time.
Speaking of measuring time, is there a way to measure all functions that are being called (especially of various libraries the project uses), till some specific function call, and then look at what could be optimized to make the app-boot better?
You can either use tooling that modifies the bytecode to add timing measurements everywhere, or you can turn capture a profiling trace via the Debug class. | https://dev.to/pyricau/android-vitals-when-did-my-app-start-24p4 | CC-MAIN-2021-31 | refinedweb | 1,151 | 51.34 |
Using Heist and Happstack
In the course of playing around with some of the newer Haskell web application stuff recently, I found that I really like the combination of Happstack and Heist. However, there are a few challenges in getting the two to play together well, so I thought I’d write up a description of how to do it.
Step 1: Getting the dependencies right
When creating your cabal file, you’ll generally need at least the following packages:
- base (of course)
- happstack-server, for the Happstack bits
- heist, for the Heist bits
- bytestring, since it’s used extensively with Heist
- mtl, which is used in Happstack
- monads-fd, which is used in Heist
Those last two make for a rather unhappy combination, and are the subject of the next step.
Step 2: Making mtl and monads-fd play nicely
This took some figuring out. Basically, a good bit of Heist uses the MonadIO class from monads-fd. At the same time, Happstack uses the MonadIO class from mtl. Left to their own devices, this will lead to a lot of errors that ServerPartT IO is not an instance of MonadIO. Of course it is… just not that MonadIO.
Here’s the code I eventually wrote to fix it. You’ll need a number of GHC extensions to do this.
{-# LANGUAGE PackageImports #-} {-# LANGUAGE FlexibleInstances #-} import "mtl" Control.Monad.Trans {- Needed because Heist uses transformers rather than the old mtl package. -} import qualified "monads-fd" Control.Monad.Trans as TRA instance TRA.MonadIO (ServerPartT IO) where liftIO = liftIO
The language extension PackageImports allows us to import modules from a specific package. In general, it’s a bad idea if you can avoid it as it can lead to fragile code… but since here we are facing the challenge of making two packages work together, there’s not another choice. We use this extension to import both the mtl and monads-fd versions of the MonadIO type class. The second language extension, FlexibleInstances, relaxes some of the Haskell98 rules regarding what kinds of instances are allowed. It is needed for the instance declaration on the last line there, and it’s pretty harmless.
Once we’ve got both versions of MonadIO and its member, liftIO, imported properly, we simply write an instance declaration making ServerPartT IO an instance of the monads-fd version of MonadIO (copying the actual behavior straight from the mtl version). Voila, problem solved.
Step 3: Porting the glue code
If you’ve worked through the Snap tutorial, you know that you can start a new Snap project by typing ‘snap init’ at the command line, and the command writes a bit of code for you. That code includes a module called “Glue” that’s largely about making Snap and Heist work together. Well, we’ll need the same code for Happstack, and have no automated command to write it for us. Not to worry, though, it’s a piece of cake to port it over, and you can tweak it as you go.
Here’s the very simple application I ended up with that uses Heist in Happstack. This is the full source code; it assumes that underneath the current working directory when the application is run, there’s a directory called “web” containing your templates, and a subdirectory of that called “web/static” containing your static files.
{-# LANGUAGE OverloadedStrings #-} {-# LANGUAGE PackageImports #-} {-# LANGUAGE FlexibleInstances #-} module Main where import Control.Monad (msum, mzero) import Happstack.Server import Happstack.Server.HTTP.FileServe import Text.Templating.Heist import Text.Templating.Heist.TemplateDirectory import Data.ByteString.Char8 (ByteString) import qualified Data.ByteString.Char8 as B import qualified Data.ByteString.Lazy as L import "mtl" Control.Monad.Trans {- Needed because Heist uses transformers rather than the old mtl package. -} import qualified "monads-fd" Control.Monad.Trans as TRA instance TRA.MonadIO (ServerPartT IO) where liftIO = liftIO main :: IO () main = do td <- newTemplateDirectory' "web" emptyTemplateState simpleHTTP nullConf $ msum [ dir "static" $ fileServe [] "web/static", templateServe td, dir "reload" $ nullDir >> templateReloader td, ] templateReloader td = do e <- reloadTemplateDirectory td return $ toResponseBS "text/plain; charset=utf-8" $ L.fromChunks [either B.pack (const "Templates loaded successfully.") e] templateServe td = msum [ nullDir >> render td "index", withRequest (return . rqUri) >>= render td . B.pack ] render td template = do ts <- getDirectoryTS td bytes <- renderTemplate ts template flip (maybe mzero) bytes $ \x -> do return (toResponseBS "text/html; charset=utf-8" (L.fromChunks [x]))
And that’s it! You have a working Happstack application using Heist as a template engine.
Nice!
I have been hoping to get official support for heist into happstack since heist does provide a new style of templates that seems useful.
What do you think would be require for official support ? A ToMessage instance for responses would be nice. It seems like the biggest annoyance is the mtl vs monads-fd support.
Happstack is by no means committed to sticking with mtl. But, I wonder if it is too early for happstack to make the switch? Your workaround seems reasonable in the meantime though..
I honestly don’t think much else is needed. It’s a testament to both packages, I think, that they can be made to work together in around a dozen lines of code, and no added complexity is necessary except for the well-known mtl/transformers issue. By contrast, Michael Snoyman’s persistent package looks tempting as well, but I’ve avoided it so far because the list of dependencies includes hamlet and web-routes-quasi, and I anticipate some hoop-jumping there.
I’m working on a somewhat involved web site now, though, that I intend to write in Happstack and Heist, so I’ll be in a better position to answer that question in about two to three weeks.
To make the instance a bit more general you can use:
import qualified “monads-fd” Control.Monad.Trans as TRA
instance (MonadIO m) => TRA.MonadIO (ServerPartT m)
where liftIO = liftIO
Hello,
I just uploaded a new stable happstack, and patched the development version of happstack to use mtl-2:
My understanding is that mtl-2 is what used to be called monads-fd, and monads-fd is now a dummy package that imports mtl-2.
So, I think that means you should be able to use heist with happstack without having to do any MonadIO hackery provided you rebuild happstack against mtl-2 ?
– jeremy
I packaged this up and put it on hackage. (I put your name in the copyright).
Thanks! | https://cdsmith.wordpress.com/2010/10/05/using-heist-and-happstack/?like=1&source=post_flair&_wpnonce=0772592eec | CC-MAIN-2015-27 | refinedweb | 1,077 | 64.2 |
caulk 0.1-1
Python tool for diagnosing memory leaks.
A SmartFile Open Source project. Read more about how SmartFile uses and contributes to Open Source software.
Introduction
This package consists of a library and command line tool. The library allows you to dump the Python objects in memory to a dump file.
The command line tool is used to examine the dump file contents.
Creating a dump file.
To add memory dump capability to one of your Python applications, import the caulk library and register a signal handler.
import caulk caulk.handler()
A number of kwargs can control the operation of the signal handler.
- signum - The signal to react to, signal.SIGUSR1 by default.
- path - The path to which to write the dump file /var/tmp by default.
- name - The name of the dump file, ‘caulk’ by default..
A dump file’s name will be: ‘{0}-{1}-{2}.dump’.format(name, pid, time).
To generate a dump file, use kill:
# kill -usr1 <pid>
Where <pid> is the pid of your running application.
You can also use the low-level API to generate a dump file directly instead of relying on a signal handler.
import caulk caulk.dump('/var/tmp/my.dump')
Examining a dump file.
To examine the dump file, use the caulk command.
# caulk --classes /var/tmp/caulk-1025-1346255743.435316.dump count total average min/max class ---------------------------------------------------------------------- 4 256 64 64/64 unittest.suite.TestSuite 1 64 64 64/64 unittest.runner._WritelnDecorator 1 64 64 64/64 site._Helper ...
For more information on using the caulk command, see it’s help.
# caulk --help
- Author: Ben Timby
- Download URL:
- License: MIT
- Categories
- Package Index Owner: Ben.Timby
- DOAP record: caulk-0.1-1.xml | https://pypi.python.org/pypi/caulk/0.1-1 | CC-MAIN-2018-13 | refinedweb | 287 | 70.9 |
This is a
playground to test code. It runs a full
Node.js environment and already has all of
npm’s 400,000 packages pre-installed, including
baret with all
npm packages installed. Try it out:
require()any package directly from npm
awaitany promise instead of using callbacks (example)
This service is provided by RunKit and is not affiliated with npm, Inc or the package authors.
[ ≡ | Tutorial | Reference ]
Baret is a library that allows you to embed Bacon.js observables into React Virtual DOM. Embedding observables into VDOM has the following benefits:
reffor component lifetime, leading to more concise code.
Using Baret couldn't be simpler. You just
import React from "baret" and you
are good to go.
baret-liftattribute
fromBacon(observableVDOM)
fromClass(Component)
$refattribute
To use Baret, you simply import it as
React:
import React from "baret"
and you can then write React components:
const oncePerSecond = Bacon.interval(1000).toProperty() const Clock = () => <div> The time is {oncePerSecond.map(() => new Date().toString())}. </div>
with VDOM that can have embedded Bacon.js observables.
NOTE: The result, like the
Clock above, is just a React component. If
you export it, you can use it just like any other React component and even in
modules that do not import
baret.
baret-liftattribute baret-lift {...props}/>
to be able to use
Link1 with
embedded Bacon.js observables:
<Link1 href="" ref={elem => elem && elem.focus()}> {Bacon.sequentially(1000, [3, 2, 1, "Boom!"])} </Link1>
Note that the
ref attribute is only there as an example to contrast
with
$ref.}) => fromBacon(ifte(choice, <True/>, <False/>))> {ifte(choice, <True/>, <False/>)} </div>
fromClass(Component)
fromClass allows one to lift a React component.
For example:
import * as RR from "react-router" import {fromClass} from "baret" const Link2 = fromClass(RR.Link)
WARNING: A difficulty with lifting components is that you will then need to
use the
$ref attribute, which is not necessary when
using
baret-lift to lift an element.
$refattribute
The
$ref attribute on an element whose component is lifted using
fromClass
<Link2 href="" $ref={elem => elem && elem.focus()}> {Bacon.sequentially(1000, [3, 2, 1, "Boom!"])} </Link2>
does the same thing as the ordinary
JSX
ref attribute:
JSX/React treats
ref as a special case and it is not passed to components, so
a special name had to be introduced for it. | https://npm.runkit.com/baret | CC-MAIN-2020-10 | refinedweb | 382 | 57.47 |
Wiki
glLoadGen / Common_Extension_Files
Extension files are a good mechanism for collating useful sets of extensions for easy referencing. The LoadGen system comes with a small library of pre-built extension files which you may find useful.
To include these names from the command-line, you should use the
-stdext option, instead of
-extfile. The difference is where they search;
extfile is always relative to the directory you're currently in, while
stdext will search the
extfiles/ directory of where LoadGen is stored.
To include these names from an extension file, you should use
#include <> instead of
#include "", for the same reasons as above.
All of these extension files are located in the directory
extfiles of LoadGen's directory. However, you don't need to prefix the name with
extfiles/ Therefore, any inclusion of them should be as this:
-stdext=extfiles/<include filename> or
#include <extfiles/<include filename>>.
Here is a list of the files and what they include:
gl_ubiquitous.txt:
For the kinds of extensions that should be core OpenGL, but aren't for IP reasons. Namely, anisotropic filtering, and the extensions needed for S3TC.
gl_core_post_3_3.txt:
Core extensions that are widely available on OpenGL 3.3, but aren't part of GL 3.3 itself. These are for post-3.3 API improvements, like internalformat_query, shading_language_420pack, separate_shader_objects, and so forth.
gl_plat_3_3.txt:
Vendor-specific extensions that are implemented by multiple vendors for 3.x-class hardware. Things like NV_texture_barrier.
gl_AMD_3_3.txt:
AMD's HD-2xxx, 3xxx, and 4xxx line of hardware all support GL 3.3. However, they also support some features of 4.x-class hardware via non-core extensions. This file includes those extensions (transform_feedback2/3, draw_buffers_blend, etc).
gl_macosx_3_2.txt:
All of the extensions allowed by core 3.2 profiles in MacOSX, as of version MacOSX 10.8.
wgl_common.txt:
Commonly useful non-vendor-specific WGL extensions. The basic stuff: getting extensions_string, create_context, swap_control, various pixel-format extensions, etc.
wgl_AMD.txt:
Useful AMD vendor WGL extensions.
wgl_NV.txt:
Useful NVIDIA vendor WGL extensions.
glx_common.txt:
Commonly useful non-vendor-specific GLX extensions. The basic stuff: fbconfig_float, framebuffer_sRGB, multisample, etc.
Updated | https://bitbucket.org/alfonse/glloadgen/wiki/Common_Extension_Files | CC-MAIN-2019-18 | refinedweb | 353 | 51.65 |
We’ve updated our SDKs, and this code is now deprecated.
Good news is we’ve written a comprehensive guide to building a multiplayer game. Check it out!
In my quest to change the world, I’ve been experimenting with an HTML5 game engine to push the limits of the browser as a platform for serious gaming. melonJS has been my tool of choice because it’s lightweight, runs well on mobile, and is very easy to use. Starting last year, I became co-developer on the project. One question that always comes up on the melonJS forum is the best way to use Node.js/socket.io to build multiplayer games. In this article, I’ll be using PubNub, but many of the techniques can be applied to socket.io as well.
For this experiment, I’ll start with the platformer demo that ships with the melonJS 0.9.9 source code, and transform it into a multiplayer HTML5 game with just a few extra lines of code. And all without any servers! This is only possible with the power provided by PubNub.
Want to see the end result? Check it out here. I’ll walk you through how to add multiplayer support to your own game below:
Download melonJS
First step, clone the git repository and checkout the 0.9.9 tag:
$ git clone $ cd melonJS $ git checkout 0.9.9
Next, you’ll want to follow the build instructions to build the library. And you can test the vanilla platformer demo by launching an HTTP server with Python:
$ python -m SimpleHTTPServer
Now visit the URL in your favorite web browser:
It’s a very simple game demo, with a handful of enemies and two maps. However, we want multiplayer support as well. What I started with is a simple module to handle the multiplayer communications..
mp.js
var Multiplayer = Object.extend({ init : function (new_player) { this.pubnub = PUBNUB.init({ publish_key : "PUBLISH KEY HERE", subscribe_key : "SUBSCRIBE KEY HERE" }); this.new_player = new_player; // Record my UUID, so I don't process my own messages this.UUID = this.pubnub.uuid(); // Listen for incoming messages this.pubnub.subscribe({ channel : "PubNub-melonJS-demo", message : this.handleMessage.bind(this) }); }, handleMessage : function (msg) { // Did I send this message? if (msg.UUID === this.UUID) return; // Get a reference to the object for the player that sent // this message var obj = me.game.getEntityByName(msg.UUID); if (obj.length) { obj = obj[0]; } else { var x = obj.pos && obj.pos.x || 50; var y = obj.pos && obj.pos.y || 50; obj = this.new_player(x, y); obj.name = msg.UUID; } // Route message switch (msg.action) { case "update": // Position update obj.pos.setV(msg.pos); obj.vel.setV(msg.vel); break; // TODO: Define more actions here } }, sendMessage : function (msg) { msg.UUID = this.UUID; this.pubnub.publish({ channel : "PubNub-melonJS-demo", message : msg }); } });
This class has a constructor and two methods; the constructor takes one callback, and the
sendMessage() method is the one we’ll be using to send game state updates. This module also does some useful things like creating new player objects, and handling player position updates. I placed this file (mp.js) into the
platformer directory, and included it within index.html (along with pubnub-3.4.5-min.js)
Creating a new Multiplayer object
To initialize the Multiplayer object, I added a few lines after the level has been loaded, around line 104:
// Instantiate the Multiplayer object game.mp = new Multiplayer(function (x, y) { // Create a new player object var obj = me.entityPool.newInstanceOf("mainplayer", x, y, { spritewidth : 72, spriteheight : 98, isMP : true }); me.game.add(obj, 4); me.game.sort(); return obj; });
This creates the object, placing a reference into the
game namespace as
game.mp, and passes a callback function that will create new player objects when we receive messages from other players we haven’t seen before.
That
isMP : true line is important! It will be used later to determine whether the player object is Keyboard-controlled, or controlled by messages from the network.
Side note: to make testing easier, you can disable the “automatic pause” feature when navigating away from the browser window. I added the following line just before the call to
me.video.init() in main.js:
me.sys.pauseOnBlur = false;
Turning the PlayerEntity object into a Multi-PlayerEntity Object
Now we’re ready to hack the PlayerEntity object to work in a multiplayer environment, sending position updates, and ignoring the keyboard input for the
isMP entities. Starting in entities.js at line 25, I added two new properties:
this.isMP = settings.isMP; this.step = 0;
Then I changed the following lines to be conditional on the value of the
isMP property. The viewport follow and key bindings should be skipped if the entity is a multiplayer object:
if (!this.isMP) { // set the display around our position /* ... snip */ // enable keyboard /* ... snip */ }
The original code has been snipped from the example above, but it should be pretty obvious what needs to be changed here.
In the
PlayerEntity.update() method, there are a few things that also need to be made conditional on the value of
isMP. This first checks the key status:
if (!this.isMP) { if (me.input.isKeyPressed('left')) { /* ... snip */ } if (me.input.isKeyPressed('jump')) { /* ... snip */ } }
There’s also a call to
me.game.viewport.fadeIn() that reloads the level when the player falls into a hole. We could make that conditional too, if we don’t want to reload the level when other players fall in.
And finally, there’s a comment at the end of the method about checking if the player has moved. This is the perfect hook for sending out player position updates to other players! I added the following code just before the call to
this.parent():
if (this.vel.x !== 0) this.flipX(this.vel.x < 0); if (!this.isMP) { // Check if it's time to send a message if (this.step == 0) { game.mp.sendMessage({ action : "update", pos : { x : this.pos.x, y : this.pos.y }, vel : { x : this.vel.x, y : this.vel.y } }); } if (this.step++ > 3) this.step = 0; }
The first two lines will fix the “direction” of the player object when it is updated by a message from the network.
The rest contains a basic counter to prevent sending messages too fast, and the final message publish that other players will receive.
Play Our Multiplayer HTML5 Games Online!
The final demo can be played online now! And you can also have a peek at the full patch here. A much better approach would be separating control logic entirely from the entity. But in this case, the demo serves its purpose. Maybe next time, we can work on synchronizing more of the game state, like enemy positions and individual score counters!
Sign up for free and use PubNub to power multiplayer games! | https://www.pubnub.com/blog/lightweight-multiplayer-html5-games-with-pubnub-and-melonjs/ | CC-MAIN-2021-31 | refinedweb | 1,140 | 60.41 |
29 October 2010 07:24 [Source: ICIS news]
TOKYO (ICIS)--Japan’s Mitsubishi Gas Chemical has posted a first-half net profit of yen (Y)6.4bn ($79m) reversing a net loss of Y2.4bn in the year-ago period on increased sales volumes and higher market prices, it said on Friday.
Operating profit for the six months to 30 September was Y12.6bn, reversing an operating loss of Y1.10bn year on year, while sales were up 26% to Y227.1bn from Y180.8bn, the chemicals producer added in a statement.
In the aromatic chemicals segment, sales of specialty aromatic chemicals including meta-xylenediamine increased overseas, while earnings of purified isophthalic acid were low in ?xml:namespace>
As a result, the aromatic chemicals segment posted a first-half operating loss of Y350m, against an operating loss of Y4.55bn in the year-ago period, the producer added.
Sales in the segment totaled Y55bn, up 26% from Y43.5bn.
($1 = Y81.1)
To discuss issues facing the chemical industry go to ICIS connectTo | http://www.icis.com/Articles/2010/10/29/9405659/japans-mitsubishi-gas-chemical-swings-to-h1-net-profit-of-79m.html | CC-MAIN-2014-10 | refinedweb | 173 | 59.5 |
I have the following problem. In my code I am getting at one point a list which does look like the following example:
['-0---110', '--1--110', '01---100', '1--101-0', '10-1-100',...., '10100010']
['-0---110', '--1--110', '01---100', '1--101-0', '10-1-100',...., '10100010'].count(-)
barcounter = numpy.zeros(8)
for x in range(len(list)):
rankcounter[8-1-list[x].count("-")] += 1
print("barcounter", barcounter)
I get the sense of what you were going for, but you'll need to loop through the list. Here is a solution that returns a dictionary mapping from number of bars to frequency of which that many bars appeared in a string:
from collections import defaultdict def get_bar_freq(bar_list): bar_freq = defaultdict(int) # a dictionary that will keep track of frequencies for word in bar_list: num_bars = word.count('-') bar_freq[num_bars] += 1 # increment freq of this many num_bars return bar_freq def main(): bar_list = ['-0---110', '--1--110', '01---100', '1--101-0', '10-1-100', '10100010'] print(get_bar_freq(bar_list)) if __name__ == '__main__': main()
This outputs:
defaultdict(<class 'int'>, {0: 1, 2: 1, 3: 2, 4: 2}) i.e. it is saying 1 string contained 0 bars, 1 string contained 2 bars, 2 strings contained 3 bars, and 2 strings contained 4 bars. | https://codedump.io/share/Yu4fYRMIqRED/1/how-many-specific-characters-in-a-list | CC-MAIN-2017-09 | refinedweb | 211 | 77.67 |
Arena storage pool
You are encouraged to solve this task according to the task description, using any language you may know.
Dynamically.
Contents
Ada[edit]
In Ada the choice of storage pool is controlled by the type of the pointer. Objects pointed by anonymous access types are allocated in the default storage pool. Pool-specific pointer types may get a pool assigned to them:
type My_Pointer is access My_Object;
for My_Pointer'Storage_Pool use My_Pool;
The following example illustrates implementation of an arena pool. Specification:
with System.Storage_Elements; use System.Storage_Elements;
with System.Storage_Pools; use System.Storage_Pools;
package Arena_Pools is
type Arena (Size : Storage_Count) is new Root_Storage_Pool with private;
overriding
procedure Allocate
( Pool : in out Arena;
Address : out System.Address;
Size : Storage_Count;
Alignment : Storage_Count
);
overriding
procedure Deallocate
( Pool : in out Arena;
Address : System.Address;
Size : Storage_Count;
Alignment : Storage_Count
) is null;
overriding
function Storage_Size (Pool : Arena) return Storage_Count;
private
type Arena (Size : Storage_Count) is new Root_Storage_Pool with record
Free : Storage_Offset := 1;
Core : Storage_Array (1..Size);
end record;
end Arena_Pools;
Here is an implementation of the package:
package body Arena_Pools is
procedure Allocate
( Pool : in out Arena;
Address : out System.Address;
Size : Storage_Count;
Alignment : Storage_Count
) is
Free : constant Storage_Offset :=
Pool.Free + Alignment - Pool.Core (Pool.Free)'Address mod Alignment + Size;
begin
if Free - 1 > Pool.Size then
raise Storage_Error;
end if;
Pool.Free := Free;
Address := Pool.Core (Pool.Free - Size)'Address;
end Allocate;
function Storage_Size (Pool : Arena) return Storage_Count is
begin
return Pool.Size;
end Storage_Size;
end Arena_Pools;
The following is a test program that uses the pool:
with Arena_Pools;
use Arena_Pools;
procedure Test_Allocator is
Pool : Arena_Pools.Arena (1024);
type Integer_Ptr is access Integer;
for Integer_Ptr'Storage_Pool use Pool;
X : Integer_Ptr := new Integer'(1);
Y : Integer_Ptr := new Integer'(2);
Z : Integer_Ptr;
begin
Z := new Integer;
Z.all := X.all + Y.all;
end Test_Allocator;
C[edit]
For C, dynamic memory is often used for structures and for arrays when the size of the array is unknown in advance. 'Objects' in C are pretty much structures, with the structure sometimes including a pointer to a virtual dispatch table.
To use dynamic memory, the header for the standard library must be included in the module.
#include <stdlib.h>
Uninitialized memory is allocated using the malloc function. To obtain the amount of memory that needs to be allocated, sizeof is used. Sizeof is not a normal C function, it is evaluated by the compiler to obtain the amount of memory needed.
int *var = malloc(n*sizeof(int));
Typename *var = malloc(sizeof(Typename));
Typename *var = malloc(sizeof var[0]);
Since pointers to structures are needed so frequently, often a typedef will define a type as being a pointer to the associated structure. Once one gets used to the notation, programs are actually easier to read, as the variable declarations don't include all the '*'s.
typedef struct mytypeStruct { .... } sMyType, *MyType;
MyType var = malloc(sizeof(sMyType));
The calloc() function initializes all allocated memory to zero. It is also often used for allocating memory for arrays of some type.
/* allocate an array of n MyTypes */
MyType var = calloc(n, sizeof(sMyType));
MyType third = var+3; /* a reference to the 3rd item allocated */
MyType fourth = &var[4]; /* another way, getting the fourth item */
Freeing memory dynamically allocated from the heap is done by calling free().
free(var);
One can allocate space on the stack using the alloca() function. You do not free memory that's been allocated with alloca
Typename *var = alloca(sizeof(Typename));
An object oriented approach will define a function for creating a new object of a class. In these systems, the size of the memory that needs to be allocated for an instance of the class will often be included in the 'class' record. See
C++[edit]std + size))
{
Erlang[edit]
Given automatic memory handling the only way to ask for memory in Erlang is when creating a process. Likewise the only way to manually return memory is by killing a process. So the pool could be built like this. The unit for memory is word, b.t.w.
-module( arena_storage_pool ).
-export( [task/0] ).
task() ->
Pid = erlang:spawn_opt( fun() -> loop([]) end, [{min_heap_size, 10000}] ),
set( Pid, 1, ett ),
set( Pid, "kalle", "hobbe" ),
V1 = get( Pid, 1 ),
V2 = get( Pid, "kalle" ),
true = (V1 =:= ett) and (V2 =:= "hobbe"),
erlang:exit( Pid, normal ).
get( Pid, Key ) ->
Pid ! {get, Key, erlang:self()},
receive
{value, Value, Pid} -> Value
end.
loop( List ) ->
receive
{set, Key, Value} -> loop( [{Key, Value} | proplists:delete(Key, List)] );
{get, Key, Pid} ->
Pid ! {value, proplists:get_value(Key, List), erlang:self()},
loop( List )
end.
set( Pid, Key, Value ) -> Pid ! {set, Key, Value}.
Fortran[edit]
Run-time memory allocation is a latter-day feature in Fortran. In the beginning, a programme would either fit in the available memory or it would not. Any local variables declared in subroutines, especially arrays, would have some storage requirement that had been fixed at compile time, and space would be reserved for all of them whether any subroutine would be invoked or not in a particular run. Fixed array sizes were particularly troublesome in subroutines, as pre-specifying some largeish size for all such arrays would soon exhaust the available memory and this was especially annoying when it was never going to be the case that all the arrays had to be available simultaneously because not all the subroutines would be invoked. Thus, developers of complicated calculations, say involving a lot of matrix manipulation, would be forced towards devising some storage allocation scheme involving scratchpad arrays that would be passed as additional parameters for subroutines to use as working storage, and soon enough one escalated to having a "pool" array, with portions being reserved and passed about the various routines as needed for a given run. Possibly escalating to further schemes involving disc storage and a lot of effort, repaid in suddenly having larger problems solvable.
Fortran 90 standardised two ameliorations. A subroutine can now declare arrays whose size is specified at run time, with storage typically organised via a stack, since on exit from the subroutine such storage is abandoned, which is to say, returned to the system pool. Secondly, within a routine, and not requiring entry into a subroutine (nor a
begin ... end; block as in Algol), storage can be explicitly allocated with a specified size for arrays as needed, this time from a "heap" storage pool, and later de-allocated. Again, on exiting the subroutine, storage for such arrays (if declared within the subroutine) is abandoned.
Thus, in a sense, a group of items for which storage has been allocated can have their storage released en-mass by exiting the routine. However, it is not the case that items A, B, C can be allocated in one storage "area" (say called "Able") and another group D, E in a second named area (say "Baker"), and that by discarding "Able" all its components would be de-allocated without the need to name them in tedious detail.So, for example:
SUBROUTINE CHECK(A,N) !Inspect matrix A.
REAL A(:,:) !The matrix, whatever size it is.
INTEGER N !The order.
REAL B(N,N) !A scratchpad, size known on entry..
INTEGER, ALLOCATABLE::TROUBLE(:) !But for this, I'll decide later.
INTEGER M
M = COUNT(A(1:N,1:N).LE.0) !Some maximum number of troublemakers.
ALLOCATE (TROUBLE(1:M**3)) !Just enough.
DEALLOCATE(TROUBLE) !Not necessary.
END SUBROUTINE CHECK !As TROUBLE is declared within CHECK.
Whereas previously a problem might not be solvable via the existing code because of excessive fixed-size storage requirements, now reduced demands can be made and those only for subroutines that are in action. Thus larger problems can be handled without agonising attempts to cut-to-fit, the usage for scratchpads such as B being particularly natural as in Algol from the 1960s. But on the other hand, a run might exhaust the available storage (either via the stack or via the heap) somewhere in the middle of job because its particular execution path made too many requests and the happy anticipation of results is instead met by a mess.
Go[edit]
package main
import (
"fmt"
"runtime"
"sync"
)
// New to Go 1.3 are sync.Pools, basically goroutine-safe free lists.
// There is overhead in the goroutine-safety and if you do not need this
// you might do better by implementing your own free list.
func main() {
// Task 1: Define a pool (of ints). Just as the task says, a sync.Pool
// allocates individually and can free as a group.
p := sync.Pool{New: func() interface{} {
fmt.Println("pool empty")
return new(int)
}}
// Task 2: Allocate some ints.
i := new(int)
j := new(int)
// Show that they're usable.
*i = 1
*j = 2
fmt.Println(*i + *j) // prints 3
// Task 2 continued: Put allocated ints in pool p.
// Task explanation: Variable p has a pool as its value. Another pool
// could be be created and assigned to a different variable. You choose
// a pool simply by using the appropriate variable, p here.
p.Put(i)
p.Put(j)
// Drop references to i and j. This allows them to be garbage collected;
// that is, freed as a group.
i = nil
j = nil
// Get ints for i and j again, this time from the pool. P.Get may reuse
// an object allocated above as long as objects haven't been garbage
// collected yet; otherwise p.Get will allocate a new object.
i = p.Get().(*int)
j = p.Get().(*int)
*i = 4
*j = 5
fmt.Println(*i + *j) // prints 9
// One more test, this time forcing a garbage collection.
p.Put(i)
p.Put(j)
i = nil
j = nil
runtime.GC()
i = p.Get().(*int)
j = p.Get().(*int)
*i = 7
*j = 8
fmt.Println(*i + *j) // prints 15
}
- Output:
3 9 pool empty pool empty 15
J[edit]
The concepts of pools and allocation is foreign to J, and excessively verbose for most purposes. However, this task can be accomplished by relying on J's facilities for dealing with code written in foreign languages.
For example, you can define a class which allocates a pool of integers:
coclass 'integerPool'
require 'jmf'
create=: monad define
Lim=: y*SZI_jmf_
Next=: -SZI_jmf_
Pool=: mema Lim
)
destroy=: monad define
memf Pool
codestroy''
)
alloc=: monad define
assert.Lim >: Next=: Next+SZI_jmf_
r=.Pool,Next,1,JINT
r set y
r
)
get=: adverb define
memr m
)
set=: adverb define
y memw m
)
With this script you can then create instances of this class, and use them. In this case, we will create a pool of three integers:
pool0=: 3 conew 'integerPool'
x0=: alloc__pool0 0
x1=: alloc__pool0 0
x2=: alloc__pool0 0
x0 set__pool0 7
x1 set__pool0 8
x2 set__pool0 9
x0 get__pool0 + x1 get__pool0 + x2 get__pool0
24
Finally, the pool can be destroyed:
destroy__pool0 _
That said, using J's built-in support for integers (and for using them) usually results in better code.
Mathematica[edit]
Mathematica does not allow stack/heap control, so all variables are defined on the heap. However, tags must be given a value for a meaningful assignment to take place.
f[x_] := x^2
Oforth[edit]
This only way to allocate memory is to ask new class method on a class object. This will create an instance of this class on the heap. The heap is managed by the garbage collector.
The stacks (data stack and execution stack) only holds addresses of these objects. There is no object created on the stacks, apart small integers.
There is no user-defined storage pool and it is not possible to explicitly destroy an object.
Object Class new: MyClass(a, b, c)
MyClass new
ooRexx[edit]
In ooRexx:
- Everything is an object.
- Objects are dynamically allocated.
- Unused objects are garbage collected.
Where objects appear from, or disappear to, is treated as an implementation detail.
Statements, such as assignments, class, method, and routine definitions, and ::requires directives can create objects and assign references to them to variables. Objects can also be referred to from other objects e.g. in collections such as lists. When objects are no longer referenced, the objects become candidates for garbage collection. It is not possible to explicitly destroy an object.
OxygenBasic[edit]
'==============
Class ArenaPool
'==============
string buf
sys pb,ii
method Setup(sys n) as sys {buf=nuls n : pb=strptr buf : ii=0 : return pb}
method Alloc(sys n) as sys {method=pb+ii : ii+=n}
method Empty() {buf="" : pb=0 : ii=0}
end class
macro Create(type,name,qty,pool)
type name[qty] at (pool##.alloc qty * sizeof type)
end macro
'====
'DEMO
'====
ArenaPool pool : pool.setup 1000 * sizeof int
Create int,i,100,pool
Create int,j,100,pool
j[51] <= 1,2,3,4,5
print j[51] j[52] j[53] j[54] j[55] 'result 15
pool.empty
PARI/GP[edit]
GP has no particular control over the layout of objects in memory.
PARI allocates objects on the PARI stack by default, but objects can be allocated on the heap if desired.
pari_init(1<<20, 0); // Initialize PARI with a stack size of 1 MB.
GEN four = addii(gen_2, gen_2); // On the stack
GEN persist = gclone(four); // On the heap
Pascal[edit]
The procedure New allocates memory on the heap:
procedure New (var P: Pointer);
The Pointer P is typed and the amount of memory allocated on the heap matches the type. Deallocation is done with the procedure Dispose. In ObjectPascal constructors and destructors can be passed to New and Dispose correspondingly. The following example is from the rtl docs of Free_Pascal .
Instead of implicit specification of the amount of memory using a type, the explicit amount can directly specified with the procedure getmem (out p: pointer; Size: PtrUInt);
Perl 6[edit]
Perl 6 is a high level language where, to a first approximation, everything is an object. Perl 6 dynamically allocates memory as objects are created and does automatic garbage collection and freeing of memory as objects go out of scope. There is almost no high level control over how memory is managed, it is considered to be an implementation detail of the virtual machine on which it is running.
If you absolutely must take manual control over memory management you would need to use the foreign function interface to call into a language that provides the capability, but even that would only affect items in the scope of that function, not anything in the mainline process.
There is some ability to specify data types for various objects which allows for (but does not guarantee) more efficient memory layout, but again, it is considered to be an implementation detail, the use that the virtual machine makes of that information varies by implementation maturity and capabilities.
Phix[edit]
Phix applications do not generally need to allocate and free memory explicitly, except for use in ffi, and even then the cffi package or any of the GUI wrappers can handle most or all of it for you automatically. Both arwen and win32lib (both now superceded by pGUI, and note that that both are 32-bit only, with 4-byte alignment) contain arena storage implementations which may be of interest: see eg allocate_Rect() in demo\arwen\Quick_Allocations.ew, which also offers performance benefits via a circular buffer for short-term use, and also w32new_memset()/w32acquire_mem()/w32release_mem() in win32lib.
The simplest approach however is to rely on automatic memory management (as used by pGUI, and first implemented after arwen and win32lib were originally written):
atom mem = allocate(size,true)
If the optional cleanup flag is non-zero (or true, as above), the memory is automatically released once it is no longer required (ie when the variable mem drops out of scope or gets overwritten, assuming you have not made a copy of it elsewhere, which would all be handled quite properly and seamlessly, with the deallocation not happening until all copies were also overwritten or discarded), otherwise (ie cleanup is zero or omitted) the application should invoke free() manually.
For completeness, here is a very simplistic arena manager, with just a single pool, not that it would be tricky to implement multiple pools:
sequence ap = {}
function ap_allocate(integer size)
-- allocate some memory and add it to the arena pool 'ap' for later release
atom res = allocate(size)
ap = append(ap,res)
return res
end function
procedure ap_free()
-- free all memory allocated in arena pool 'ap'
free(ap)
ap = {}
end procedure
PicoLisp[edit]
PicoLisp allocates any kind of data from a single pool, because everything is built out of a "cell" primitive. Most of this allocation happens automatically, but can also be done explicitly with 'new' or 'box'. For memory-allocated objects, there is no explicit way of freeing them. Database objects can be freed with 'zap'.
PL/I[edit]
Allocation of storage other than via routine or block entry is via the ALLOCATE statement applied to variables declared with the CONTROLLED attribute. Such storage is obtained from and returned to a single "heap" storage area during the course of execution and not necessarily corresponding to the entry and exit of routines or blocks. However, variables can further be declared as being BASED on some other variable which might be considered to be a storage area that can be manipulated separately. This can be escalated to being based IN the storage area of a named variable, say POOL. In this situation, storage for items declared IN the POOL are allocated and de-allocated within the storage space of POOL (and there may be insufficient space in the POOL, whereupon the AREA error condition is raised) so this POOL, although obtained from the system heap, is treated as if it were a heap as well.
One reason for doing this is that the addressing of entities within the POOL is relative to the address of the POOL so that pointer variables linking items with the POOL do not employ the momentary machine address of the POOL storage. The point of this is that the contents of a POOL may be saved and read back from a suitable disc file (say at the start of a new execution) and although the memory address of the new POOL may well be different from that during the previous usage, addressing within the new POOL remains the same. In other words, a complex data structure can be developed within the POOL then saved and restored simply by writing the POOL and later reading it back, rather than having to unravel the assemblage in some convention that can be reversed to read it back piece-by-piece. Similarly, if the POOL is a CONTROLLED variable, new POOL areas can be allocated and de-allocated at any time, and by de-allocating a POOL, all of its complex content vanishes in one poof.
Python[edit]
In Python:
- Everything is an object.
- Objects are dynamically allocated.
- Unused objects are garbage collected.
Where objects appear from, or disappear to, is treated as an implementation detail.
Statements, such as assignments, class and function definitions, and import statements can create objects and assign names to them which can be seen as assigning a reference to objects. Objects can also be referred to from other objects e.g. in collections such as lists.
When names go out of scope, or objects are explicitly destroyed, references to objects are diminished. Python's implementation keeps track of references to objects and marks objects that have no remaining references so that they become candidates for 'garbage collection' at a later time.
Racket[edit]
As is common with high-level languages, Racket usually deals with memory automatically. By default, this means using a precise generational GC. However, when there's need for better control over allocation, we can use the malloc() function via the FFI, and the many variants that are provided by the GC:
(malloc 1000 'raw) ; raw allocation, bypass the GC, requires free()-ing
(malloc 1000 'uncollectable) ; no GC, for use with other GCs that Racket can be configured with
(malloc 1000 'atomic) ; a block of memory without internal pointers
(malloc 1000 'nonatomic) ; a block of pointers
(malloc 1000 'eternal) ; uncollectable & atomic, similar to raw malloc but no freeing
(malloc 1000 'stubborn) ; can be declared immutable when mutation is done
(malloc 1000 'interior) ; allocate an immovable block with possible pointers into it
(malloc 1000 'atomic-interior) ; same for atomic chunks
(malloc-immobile-cell v) ; allocates a single cell that the GC will not move
REXX[edit]
In the REXX language, each (internal and external) procedure has its
own storage (memory) to hold local variables and other information
pertaining to a procedure.
Each call to a procedure (to facilitate recursion) has its own storage.
Garbage collection can be performed after a procedure finishes executing (either via an EXIT, RETURN, or some other external action), but this isn't specified in the language.
A drop (a REXX verb) will mark a variable as not defined, but doesn't necessarily deallocate its storage, but the freed storage can be used by other variables within the program (or procedure).
Essentially, the method used by a particular REXX interpreter isn't of concern to a programmer as there is but one type of variable (character), and even (stemmed) arrays aren't preallocated or even allocated sequentially in virtual (local) storage (as its elements are defined).
Some REXX interpreters have built-in functions to query how much free memory is available (these were written when real storage was a premium during the early DOS days).
/*REXX doesn't have declarations/allocations of variables, */
/* but this is the closest to an allocation: */
stemmed_array.= 0 /*any undefined element will have this value. */
stemmed_array.1 = '1st entry'
stemmed_array.2 = '2nd entry'
stemmed_array.6000 = 12 ** 2
stemmed_array.dog = stemmed_array.6000 / 2
drop stemmed_array.
Rust[edit]
#![feature(rustc_private)]
extern crate arena;
use arena::TypedArena;
fn main() {
// Memory is allocated using the default allocator (currently jemalloc). The memory is
// allocated in chunks, and when one chunk is full another is allocated. This ensures that
// references to an arena don't become invalid when the original chunk runs out of space. The
// chunk size is configurable as an argument to TypedArena::with_capacity if necessary.
let arena = TypedArena::new();
// The arena crate contains two types of arenas: TypedArena and Arena. Arena is
// reflection-basd and slower, but can allocate objects of any type. TypedArena is faster, and
// can allocate only objects of one type. The type is determined by type inference--if you try
// to allocate an integer, then Rust's compiler knows it is an integer arena.
let v1 = arena.alloc(1i32);
// TypedArena returns a mutable reference
let v2 = arena.alloc(3);
*v2 += 38;
println!("{}", *v1 + *v2);
// The arena's destructor is called as it goes out of scope, at which point it deallocates
// everything stored within it at once.
}
Scala[edit]
Spam deleted.
Tcl[edit]
Tcl does not really expose the heap itself, and while it is possible to use SWIG or Critcl to map the implementation-level allocator into the language, this is highly unusual.
However, it is entirely possible to use a pooled memory manager for Tcl's objects.
The pool engine class itself (a metaclass):
package require Tcl 8.6
oo::class create Pool {
superclass oo::class
variable capacity pool busy
unexport create
constructor args {
next {*}$args
set capacity 100
set pool [set busy {}]
}
method new {args} {
if {[llength $pool]} {
set pool [lassign $pool obj]
} else {
if {[llength $busy] >= $capacity} {
throw {POOL CAPACITY} "exceeded capacity: $capacity"
}
set obj [next]
set newobj [namespace current]::[namespace tail $obj]
rename $obj $newobj
set obj $newobj
}
try {
[info object namespace $obj]::my Init {*}$args
} on error {msg opt} {
lappend pool $obj
return -options $opt $msg
}
lappend busy $obj
return $obj
}
method ReturnToPool obj {
try {
if {"Finalize" in [info object methods $obj -all -private]} {
[info object namespace $obj]::my Finalize
}
} on error {msg opt} {
after 0 [list return -options $opt $msg]
return false
}
set idx [lsearch -exact $busy $obj]
set busy [lreplace $busy $idx $idx]
if {[llength $pool] + [llength $busy] + 1 <= $capacity} {
lappend pool $obj
return true
} else {
return false
}
}
method capacity {{value {}}} {
if {[llength [info level 0]] == 3} {
if {$value < $capacity} {
while {[llength $pool] > 0 && [llength $pool] + [llength $busy] > $value} {
set pool [lassign $pool obj]
rename $obj {}
}
}
set capacity [expr {$value >> 0}]
} else {
return $capacity
}
}
method clearPool {} {
foreach obj $busy {
$obj destroy
}
}
method destroy {} {
my clearPool
}
self method create {class {definition {}}} {
set cls [next $class $definition]
oo::define $cls method destroy {} {
if {![[info object namespace [self class]]::my ReturnToPool [self]]} {
}
}
return $cls
}
}
Example of how to use:
Pool create PoolExample {
variable int
method Init value {
puts stderr "Initializing [self] with $value"
set int $value
incr int 0
}
method Finalize {} {
puts stderr "Finalizing [self] which held $int"
}
method value {{newValue {}}} {
if {[llength [info level 0]] == 3} {
set int [incr newValue 0]
} else {
return $int
}
}
}
PoolExample capacity 10
set objs {}
try {
for {set i 0} {$i < 20} {incr i} {
lappend objs [PoolExample new $i]
}
} trap {POOL CAPACITY} msg {
puts "trapped: $msg"
}
puts -nonewline "number of objects: [llength $objs]\n\t"
foreach o $objs {
puts -nonewline "[$o value] "
}
puts ""
set objs [lassign $objs a b c]
$a destroy
$b destroy
$c destroy
PoolExample capacity 9
try {
for {} {$i < 20} {incr i} {
lappend objs [PoolExample new $i]
}
} trap {POOL CAPACITY} msg {
puts "trapped: $msg"
}
puts -nonewline "number of objects: [llength $objs]\n\t"
foreach o $objs {
puts -nonewline "[$o value] "
}
puts ""
PoolExample clearPool
Produces this output (red text to stderr, black text to stdout):
Initializing ::oo::Obj4::Obj5 with 0 Initializing ::oo::Obj4::Obj6 with 1 Initializing ::oo::Obj4::Obj7 with 2 Initializing ::oo::Obj4::Obj8 with 3 Initializing ::oo::Obj4::Obj9 with 4 Initializing ::oo::Obj4::Obj10 with 5 Initializing ::oo::Obj4::Obj11 with 6 Initializing ::oo::Obj4::Obj12 with 7 Initializing ::oo::Obj4::Obj13 with 8 Initializing ::oo::Obj4::Obj14 with 9 trapped: exceeded capacity: 10 number of objects: 10 0 1 2 3 4 5 6 7 8 9 Finalizing ::oo::Obj4::Obj5 which held 0 Finalizing ::oo::Obj4::Obj6 which held 1 Finalizing ::oo::Obj4::Obj7 which held 2 Initializing ::oo::Obj4::Obj6 with 10 Initializing ::oo::Obj4::Obj7 with 11 trapped: exceeded capacity: 9 number of objects: 9 3 4 5 6 7 8 9 10 11 Finalizing ::oo::Obj4::Obj8 which held 3 Finalizing ::oo::Obj4::Obj9 which held 4 Finalizing ::oo::Obj4::Obj10 which held 5 Finalizing ::oo::Obj4::Obj11 which held 6 Finalizing ::oo::Obj4::Obj12 which held 7 Finalizing ::oo::Obj4::Obj13 which held 8 Finalizing ::oo::Obj4::Obj14 which held 9 Finalizing ::oo::Obj4::Obj6 which held 10 Finalizing ::oo::Obj4::Obj7 which held 11
zkl[edit]
Memory allocation "just happens", unreachable memory is recovered via garbage collection. The closest thing to explicit memory allocation is Data object, which is a bit bucket you can [optionally] set the size of upon creation. However, it grows as needed. The closest thing to "new" is the create method, which tells an object to create a new instance of itself. For this task:
var pool=List(); // pool could be any mutable container
pool.append(Data(0,1234)); // allocate mem blob and add to pool
pool=Void; // free the pool and everything in it.
- Programming Tasks
- Solutions by Programming Task
- Encyclopedia
- Ada
- C
- C++
- Erlang
- Fortran
- Go
- J
- Mathematica
- Oforth
- OoRexx
- OxygenBasic
- PARI/GP
- Pascal
- Perl 6
- Phix
- PicoLisp
- PL/I
- Python
- Racket
- REXX
- Rust
- Scala
- Scala examples needing attention
- Examples needing attention
- Tcl
- Zkl
- Clojure/Omit
- Erlang/Omit
- Haskell/Omit
- Io/Omit
- Lily/Omit
- Logtalk/Omit
- M4/Omit
- Maxima/Omit
- ML/I/Omit
- Oz/Omit
- TI-89 BASIC/Omit | http://rosettacode.org/wiki/Arena_storage_pool | CC-MAIN-2017-34 | refinedweb | 4,571 | 50.06 |
OK, here I am, back again to tell about another part of this story.
For the GUIs (oops ... guys) who are interested, the previous parts are here:
And since we are at the starting point ... let me confess that this was not originally intended to be the third part of the main article. Why? Because, while developing the docking framework, I came to the conclusion that - the code having reached a certain maturity - a general revamping was necessary.
First, to fix some bugs and design flaws; second, to improve code usability and flexibility. So, I started the "part 2 1/2" with the idea to title the next "part 33 1/3". But somebody else already did it, so I came back to traditional numbering.
This article treats the reasons for the evolution of the code from the previous to this new implementation. Although the general philosophy is the same, there are certain substantial differences. If you are interested in this implementation, just download the code provided with this article. If you are interested in the overall discussion, download also the code of the previous article: it's out of date, but it is the baseline to compare against.
This article assumes you are already familiar with what was discussed in part 2.
Here are some of the fixed bugs of the previous version.
As already stated in the previous article message board, I must very much thank Norman Bates, for discovering a bug that - on analyzing - I discovered to be a "design bug".
WrpBase::~WrpBase was originally calling pThis->Detach(). This is wrong for two reasons:
WrpBase::~WrpBase
pThis->Detach()
pThis()
WrpBase
W
Detach
Relese
There is apparently no way to get out of this: auto-detaching of wrappers is the strength of wrappers, but ... that's tricky.
The solution is ... call Detach() from every WrpBase derived class destructor. And to avoid forgetting some, I placed an ASSERT inside the WrpBase destructor: no _pRC must yet exist at that time!
Detach()
ASSERT
_pRC
This is the equivalent in calling "DestroyWindow" in an MFC CWnd derived destructor.
DestroyWindow
CWnd
Other nasty bugs were hidden in Detach and XRefCount_base, related to type safety. But, because I restructured, they have been completely eliminated by the new design. But let me first finish out the bugs.
XRefCount_base
Well, in fact SString works, but I did a flaw in its parent:
SString
SShare::GetBuffer is implemented via _OwnBuffer, but _OwnBuffer was erroneously implemented in terms of lstrcpyn. This was making SShare good only for TCHARs. The implementation had been rewritten in terms of XBuffData::copyfrom, that copies the data using " ="and compares using " ==". It is now suitable for every class that is assignable and comparable for equality, and has a "nullvalue".
SShare::GetBuffer
_OwnBuffer
lstrcpyn
SShare
TCHAR
XBuffData::copyfrom
=
==
Of course, SString continues to be a SShare<TCHAR, _T('\0')>. But now, you may even have SShare<double, 0.>, or SSHrae<SSomeStruct, SSomestruct()> if it may have sense!
SShare<TCHAR, _T('\0')>
SShare<double, 0.>
SSHrae<SSomeStruct, SSomestruct()>
OK, finished with holding annoying bugs, let's go further.
While working to add features, I had a bad feeling by looking at the Solutions Explorer and opening the class view over the NWin namespace: they're risking to become monsters.
NWin
For this reason, I decided to introduce a more modular conceptualization, by splitting the library: NLib will be only the "core library". All the "features" will go into separated libraries, letting the core to be open to different improvements.
So: what will be the designated module to be part of the "core"?
GE_INLINE
GE_ONCE
interface
VERIFY
STL collections and algorithms
All of the modules in NLIB will change their interfaces in future, only to add new functions or members. No existent function prototypes should be modified anymore.
Will contain all modules that use NLib to implement GUI other features, like owner drawn menu, docking etc.
It actually contains CmdUpdt.cpp, CmdImgs.cpp, and associated resource script to include in the application resource script (Ngui.rc and NGui_res.h), plus all the deployment of this stage.
When using different modules, a common convention must be established about the usage of IDs in resource files.
Considering the use that Windows does with message codes, command codes etc., it is probably better to avoid ID lower than 0x2000 (WM_USER + OCM__BASE), that can be rounded to decimal 8200.
WM_USER
OCM__BASE
Thus, NLIB_FIRST is 8200 and NGUI_FIRST is 8300. These numbers can be good for control IDs.
NLIB_FIRST
NGUI_FIRST
For commands, it is preferable to stay in the higher part of a WORD (from 32768 and up) to avoid to confuse commands with control notifications. Hence, commands can have the same previous numbering convention, but adding an offset of decimal 40000.
WORD
Thus, 83xx will be NGUI resources, and 483xx will be NGUI commands.
Note that 32768 (0x8000) is also the default value used by EMenuUptator to decide if sending or not its query messages. Commands having values less that 32768 will not be subject to autoupdating.
EMenuUptator
The very first heavy redesign is about Wrp.h. Yes, the core has been restyled to allow better performance and operations.
In the previous versions, all wrappers inherit a traits function that casts to LPVOID the wrapped type. This value is used as a key in a static global reverse map (XMap) that allows to find the XRefCount_base associated to the wrapped value (usually, a Windows handle or a pointer).
LPVOID
XMap
There are two main drawbacks in doing this.
First: consider an application with hundred windows, and a "document" (the application data) consisting in a collection of thousand polymorphic objects sustained by smart pointers (that are wrappers of pointers!). As a consequence, the reverse map becomes huge (and hence, slow) and visited very frequently (practically every time Windows sends a message).
Second: Consider an object that is wrapped by wrappers of different types using different refcounting types (for example, to store different kinds of information). If the same instance of that object is wrapped by more than one wrapper of different types, because only one reference count can exist, some wrappers will not find the required data. This is a problem, because there is no "type safety".
This can be avoided by specializing the maps.
To do this:
types_wrp_xxx
struct
typedef
TKey
EMap
types_wrp
H
traits_wrp_xxx::Key
TT::TKey
Attach
In particular:
types_wrp_hnd<H>
typedef typename H TKey;
types_wrp_ptr<H>
typedef typename LPVOID TKey;
types_wrp_rsrc<Data>
typedef typename HRSRC TKey;
Because of these definitions, handles have one map per type (one for every H), while pointers have a single global map (of LPVOIDs). This is required to let polymorphism to work (pointers of different types may point to different components -bases- of the same complex object: it must have a single identity).
Another problem that must be solved is the fact that those maps, being declared inside static functions, are created the first time they are needed. But, if what needs them is a global wrapper, or pointer, or event, this makes their construction to happen after the constructor of the global "needer". And, as a consequence, their destruction will happen before, causing memory corruption.
To avoid this, their destruction must be delayed as late as possible. This is accomplished by creating the maps on the heap, and letting them chain themselves into a global list that is instantiated globally and as soon as possible, and that destroys the contents on deletion.
This is what EMap_base is for, together with EMap<> and EMap<>::TMap and the hidden XInit.
EMap_base
EMap<>
EMap<>::TMap
XInit
Also, in debug mode, I added some statistic feature about maps, that are traced out on program termination.
This is a NWrp::Ptr that doesn't fail in case of NULL non-const dereference. In that case, it calls a "autocreate function", passed as a template parameter (if NULL, a static function doing "new T" is assumed). The SetAutoCreateFn can be called runtime to change the creation function (for example, to pass a function that does a "new" for a type derived from T).
NWrp::Ptr
NULL
new T
SetAutoCreateFn
new
T
Because this kind of functionality is normally required when dynamic and polymorphic types are required, it exists only in the form NWrp::PtrCreator<class, ctretorfunction>::Dynamic.
NWrp::PtrCreator<class, ctretorfunction>::Dynamic
To grant a better type safety, the relation between wrapper, refcounters and maps has been reviewed.
XRefCount is now implemented through a common base all the reference counters inherit from. And is consistently named SRefCount. But the "owners counter" (that is, the counter that defines the wrapped object lifetime) is no more in the SRefCount itself, but is referred from a static map using the same TKey of the wrapper the SRefCount-er is for.
XRefCount
SRefCount
This allows to have refcounter of different classes counting together over a same value associated to the same keyed object, even maintaining different global maps.
XRefChain (now named SRefChain<W>) derives from XRefChain_base<W,D> that, in turn, is derived from XRefCount_base<D>.
XRefChain
SRefChain<W>
XRefChain_base<W,D>
XRefCount_base<D>
W is the wrapper the reference counter or chain is for, and D is the ultimate derived class of the reference counter itself (what W expects as its TRefCount).
D
TRefCount
WrpBase<H,W,t_trais,t_refCount> now has as a default for t_refCount the SRefCount struct.
WrpBase<H,W,t_trais,t_refCount>
t_refCount
Chainable wrappers are derived from WrpBase (previously, they where the same thing) as WrpChainBase<H,W,t_traits,t_refCount>, and have some more functions to get the iterators across the wrapper chain. They are expected to have a SRefChain or derived reference counter and chainer (t_refCount defaults to SRefChain<W>).
WrpChainBase<H,W,t_traits,t_refCount>
SRefChain
Because the reference count and the wrapper are now aware of their respective classes, all functions and callbacks are now type safe (no type casting is necessary).
Owning smart pointers can now be obtained as Ptr<Type>::Static or Ptr<Type>::Dynaimc (the first converts types with static_cast, the second with dynamic_cast). Observing pointers are Qtr<Typr>::Static and ::Dynamic also. Note that every wrapper can be changed from observing to owner by calling the SetOwnership member function. Ptr and Qtr are just shortcuts with different default behavior, but substantially equivalent in their capabilities.
Ptr<Type>::Static
Ptr<Type>::Dynaimc
static_cast
dynamic_cast
Qtr<Typr>::Static
::Dynamic
SetOwnership
Ptr
Qtr
This class is originally a provider for a boolean flag and a function to retrieve it. It is used in EWnd, where "autodeletion" is implemented with a "delete this" referred to a wrapper that's seeing a WM_NCDESTROY (the last message a window sees in its life). That's good for window wrappers existing on heap and owned by the window they wrap.
EWnd
delete this
WM_NCDESTROY
But there is a potential problem in this: suppose your program instantiates a modeless un-owned tool window: it's a popup-window, so it is not a child of your main window. Now, suppose your main window is being closed. All children are destroyed, and a WM_QUIT is posted (it's automatic if the window has an EWnd wrapper around). After all messages are gone, the message loop exits. But the popup is still there: no-one has removed it.
WM_QUIT
For the operating system, that's not a problem (it will do it after the WinMain returns), but no message dispatching is still in place. So, no WM_NCDESTROY is processed by the popup wrapper that survives the program termination (and - in fact - it is a leak!).
WinMain
To avoid this, now EAutodeleteFlags chain together into a static list when set "on" and remove when set "off". The list destruction (at program termination) does a deletion of all objects still in place.
EAutodeleteFlag
Here's the trick:
class EAutodeleteFlag
{
protected:
bool _bHasAutodelete;
struct XChain: public std::list<EAutodeleteFlag*>
{
~XChain()
{ ... }
};
static XChain& AutoDeleteChain() { static XChain c; return c; }
public:
EAutodeleteFlag() { _bHasAutodelete = false; }
virtual ~EAutodeleteFlag() { AutoDeleteChain().remove(this); }
bool HasAutodelete() const { return _bHasAutodelete; }
void Autodelete(bool bOn)
{
if(bOn && !_bHasAutodelete) AutoDeleteChain().push_back(this);
if(!bOn && _bHasAutodelete) AutoDeleteChain().remove(this);
_bHasAutodelete = bOn;
}
};
Just added bool IsEmpty() and bool IsUnit(), with obvious meaning. Also, compare is now aliased with oparator& when one of the operands is of type I (the template parameter).
bool IsEmpty()
bool IsUnit()
compare
oparator&
I
SLimit, is - instead - an empty class with all static template member functions (just a way to group what I didn't want to become "global") performing some frequently "compare and assign" operations, like "let a value not to go over a given maximum" etc.
SLimit
struct SLimit
{
template<class A> static A Min(const A& left, const A& right)
{return (left<right)? left:right; }
template<class A> static A Max(const A& left, const A& right)
{return (right<left)? left:right; }
template<class A> static A& OrMin(A& ref, const A& val)
{ if(val<ref) ref=val; return ref; }
template<class A> static A& OrMax(A& ref, const A& val)
{ if(ref<val) ref=val; return ref; }
template<class A> static A
OrRange(A& rmin, A& rmax, const A& val)
{ OrMin(rmin, val); OrMax(rmax, val); return val; }
template<class A> static A&
AndRange(A& ref, const A& min, const A& max)
{ OrMax(ref, min); OrMin(ref, max); return ref; }
};
To avoid improper mixing between ATL, WTL, MFC, and my macros, I decided to give all them a GE_ prefix. Of course, all namespaces become with GE_ also, so if those initials are not suitable for you ... a global find and replace to all files, and that's it! No risk of improper confusion with macros with same names, doing almost the same things, but not necessarily identical. Especially in mixed environment projects.
GE_
They're all in MessageMap.h, where I also added some more macro specializing WM_PARENTNOIFY flavors.
WM_PARENTNOIFY
In the previous article, I introduced a way to manage command updates, based on NWin::ICmdState::SendQueryNoHandler and NWin::ICmdState::SendQueryUpdate.
NWin::ICmdState::SendQueryNoHandler
NWin::ICmdState::SendQueryUpdate
These functions send the two private messages GE_QUERYCOMMANDHANDLER and GE_UPDATECOMMANDUI. Now, since those messages are related to commands, it is correct to treat them with the same logic of commands when subjected to notification forwarding or reflection.
GE_QUERYCOMMANDHANDLER
GE_UPDATECOMMANDUI
To avoid to modify EWnd behavior every time a new notification is required, I decided to re-implement this feature (and to implement future features based on notification messages) not in terms of WM_USER+xxx message, but in terms of new private WM_NOTIFY (so that it can be forwarded or reflected).
WM_USER+xxx
WM_NOTIFY
And, by the way, I used SNmHdr (see later) to register the notification code.
SNmHdr
ICmdState has been moved in NGDI, and reduced to handle the state of a command (Enable, Gray, Checked and Text).
ICmdState
Images are loaded from bitmaps and stored with various effects in SCmdImgs, and a new interface ICmdImage has been defined for handling setting and retrieving of image association to commands.
SCmdImgs
ICmdImage
Such interfaces are associated to abstract structures that derive from both the interfaces themselves and from NUtil::SNmHdr<> (see later). This makes us able to send notification messages carrying those interfaces (the structs are SCmdStateNotify and SCmdImageNotify).
NUtil::SNmHdr<>
SCmdStateNotify
SCmdImageNotify
By deriving those interfaces, it is possible to implement the virtual functions specifically for the various kinds of interfaces (menu, toolbar, statusbar or whatever).
This is done in supporting EMenuUpdator and EMenuImager, now sending those messages.
EMenuUpdator
EMenuImager
The macros to handle commands have been modified according to the new implementation (GE_COMMAND_xx_HANDLER_U series), while the macros GE_UPDATECOMMANDUI series have been removed. Command updating can be hooked using the new GE_NOTIFY_xx_REGISTREDHANDLER(..., func, type) using - as type - the SCmdXxxxNotify as required.
GE_COMMAND_xx_HANDLER_U
GE_NOTIFY_xx_REGISTREDHANDLER(..., func, type)
SCmdXxxxNotify
A rearrangement of these classes has been done to make the drawings customizable.
In particular, SCmdImgs is now abstract, and SCmdImgsIDE implements it by handling and drawing command images. You can now implement yourself other SYourCmdImgs, handling and drawing those images differently.
SCmdImgsIDE
SYourCmdImgs
SCmdDraw makes use of a new NWrp::PtrCreator smart pointer. This pointer is designed to never fail in dereference, by calling (if NULL) a given "creation" function. In the case of SCmdDraw, we have typdef PtrCreator <SCmdImgs, SCmdImgsIDE::New> PCmdImgs, where "New" is a static function returning new SCmdImgsIDE.
SCmdDraw
NWrp::PtrCreator
typdef PtrCreator <SCmdImgs, SCmdImgsIDE::New> PCmdImgs
New
new SCmdImgsIDE
The PCmdImgs "creature" can be retrieved with the static SCmdImgs& GetCmdImgs() function. Another function (SetCmdImgsType(SCmdImgs* (*pfn)()) clears the PCmdImgs and sets its creation function to the passed value. The next time the pointer is dereferenced, a new SCmdImgs-derived will be created.
PCmdImgs
static SCmdImgs& GetCmdImgs()
SetCmdImgsType(SCmdImgs* (*pfn)()
As a result, there is a single SCmdImgs alive, that can be retrieved through SCmdDraw::GetCmdImgs(), but its type can be set at runtime. That is: if you deploy a number of "drawer"s, you can also design an interface to let the user to select his preferred.
SCmdDraw::GetCmdImgs()
They are the ATL-like messagemap macros used to dispatch WM_NOTIFY messages. They have been easily improved to accept an additional "type" parameter.
type
The new form is now GE_NOTIFY_xxx_TYPEDHANDLER( ... , func, type). (Note: depending on the particular macro, the "..." are code, id, range of IDs, or a combination of these). This allows to specify in the macro the type to which it will be cast, the LPNMHDR carried by the LPARAM message parameter.
GE_NOTIFY_xxx_TYPEDHANDLER( ... , func, type)
code
id
range
LPNMHDR
LPARAM
This allows you to declare message-handlers to have - as a parameter - directly a reference to the required structure (for example, NMLISTVIEW&), not a LPNMHDR to be cast in the function body.
NMLISTVIEW&
To let Windows to reciprocally signal events, Windows provides a message dispatching architecture based on messages (MSG) and some API to send (SendMessage), post (PostMessage), retrieve (GetMessage, PeekMessage) and dispatch (DispatchMessage, WINDOWPROC). In this framework, WINDOWPOC is always an internal subclassing window procedure, and dispatching is done with message maps. Sending messages require instead more attention.
MSG
SendMessage
PostMessage
GetMessage
PeekMessage
DispatchMessage
WINDOWPROC
WINDOWPOC
If sending already defined messages, we can simply call the SendMessage API passing the required parameters. If sending other kinds of messages, we need - at least - to define a way to identify them. This can be done by defining some manifest constant like #define WM_MYMESSAGE (WM_USER+xxx), but, imagine a source composed by various library modules and different components, may be from different developers. There is the need of a very strict numbering convention (to avoid reuse of same IDs in different sources) or something that can automate this.
#define WM_MYMESSAGE (WM_USER+xxx)
NUtil::XId_base provides a static function (UINT NewVal()) that returns the value of an incremented static counter on every call.
NUtil::XId_base
UINT NewVal()
NUtil::SId<T> provides a UINT _getval() function, that returns the value of a static variable initialized to XId_base::NewVal() the first time _getval is called. It also has an operator UINT() returning that value. This allows to associate as many UINTs to as many T types we may want to use with SId.
NUtil::SId<T>
UINT _getval()
XId_base::NewVal()
_getval
operator UINT()
UINT
SId
SNmHdr<N> is a struct having a NMHDR as a first member, initializing its "code" member with (UINT)SId<N>(). Windows used to define the WM_NOTIFY codes in the "commoncontrol" library in the form (OU - xxxU) (hence: from 0xFFFFFFFF down to ... about 3000 codes). Since I made XId_base to start from 0x2000 going up ... there are lots of IDs that can be used.
SNmHdr<N>
NMHDR
(UINT)SId<N>()
(OU - xxxU)
XId_base
SNmHdr<N> has also a LRESULT Send(HWND hTo) function, that does a SendMessage(hTo, WM_NOTIFY, nmhdr.idFrom, (LPARAM)this);. We can so derive a struct (say SMyNotification) from SNnHdr<SMyNotification>, filling its members as required, and call Send.
LRESULT Send(HWND hTo)
SendMessage(hTo, WM_NOTIFY, nmhdr.idFrom, (LPARAM)this);
SMyNotification
SNnHdr<SMyNotification>
Send
To retrieve such a message, I provided some messagemap macros (GE_NOTIFY_REGISTREDHANDLER, GE_NOTIFY_CODE_REGISTREDHANDLER, GE_NOTIFY_RANGE_CODE_REGISTREDHANDLER) taking a "type" parameter, checking uMsg == WM_NOTIFY, and GE_::NUtil::SId<type><TYPE>() == ((LPNMHDR)lParam)->code), calling a function as lResult = func((int)wParam, *(type*)lParam, bHandled).
GE_NOTIFY_REGISTREDHANDLER
GE_NOTIFY_CODE_REGISTREDHANDLER
GE_NOTIFY_RANGE_CODE_REGISTREDHANDLER
uMsg == WM_NOTIFY
GE_::NUtil::SId<type><TYPE>() == ((LPNMHDR)lParam)->code)
lResult = func((int)wParam, *(type*)lParam, bHandled)
So we can place, for example, a GE_NOTIFY_CODE_REGISTREDHANDLER(OnMyHandler, SMyNotification) entry in the message map of a window to call the LRESULT OnMyHandler(int nID, SMyNotification& myntf, bool& bHandled) member function.
GE_NOTIFY_CODE_REGISTREDHANDLER(OnMyHandler, SMyNotification)
LRESULT OnMyHandler(int nID, SMyNotification& myntf, bool& bHandled)
Imagine having a frame window with a child view inside. Menu and toolbars normally belong to the frame, and send WM_COMMAND and WM_NOTIFY to the frame.
WM_COMMAND
But, you may be interested to handle those commands from a child view. You can do this by chaining the massage map of the frame to an alternate message map of the view (but, this means to transfer all messages). Or you can forward only WM_COMMAND or WM_NOTIFY.
That's what the GE_ROUTE_MSG_MAP_xxxx macros are for. You can route to a Class, to a Member, or through a Pointer (NULL pointer check is done).
GE_ROUTE_MSG_MAP_xxxx
There are two distinct series of macros for commands and notify. But remember, if you use autoupdate commands (GE_COMMAND_ID_HANDLER_U), that autoupdate is itself a WM_NOTIFY message. So, if you route commands ... route also WM_NOTIFY in the same way. Or use the macros that call both the series at the same time.
GE_COMMAND_ID_HANDLER_U
Note also that "ROUTE" macros call some::ProcessWindowMessage. This is not like "sending" a message: if a window has multiple wrappers, calling SendMessage makes all wrappers to be able to receive the message in their default message map, while "ROUTE" makes only the passed wrapper to handle the routed messages or commands.
some::ProcessWindowMessage
If you want to re-send a command to a given window, instead of "route" it to a given wrapper, use GE_FORWARD_COMMANDS instead. It resends WM_COMMAND and WM_NOTIFY.
GE_FORWARD_COMMANDS
Another way to tell a window to handle a message originally sent to another one (without confusing it with the ones intended for that window) is "forwarding by encapsulation". The original message is re-sent inside another message to another window. This trick comes with ATL (ATL_FORWRARD_MESSAGE), but here, I generalized it.
ATL_FORWRARD_MESSAGE
GE_FORWARD_MESSAGE(hWndTo, code) sends a WM_GE_FORWARDMSG, whose parameters are a code ID (as WPARAM) and a NWin::XWndProcParams* (as LPARAM: it carries the original message parameters).
GE_FORWARD_MESSAGE(hWndTo, code)
WM_GE_FORWARDMSG
WPARAM
NWin::XWndProcParams*
You can handle it in a destination HWND wrapper message map using GE_WM_FORWARDMSG(func), where "func" is a LRESULT func (NWin::XWndProcParams& msg, DWORD nCode, bool& bHandled);.
HWND
GE_WM_FORWARDMSG(func)
func
LRESULT func (NWin::XWndProcParams& msg, DWORD nCode, bool& bHandled);
Or ... you can "decapsulate" the original message by recursively calling ProcessWindowMessage after a parameter extraction from XWndProcParams.
ProcessWindowMessage
XWndProcParams
That's what can be done with:
GE_WM_FORWARDMSG_ALT(msgMapID): decapsulate the original message and make it available to the same message map, in another ALT_MSG_MAP section. This makes you able to handle the message with the GE_WM_xxx crackers.
GE_WM_FORWARDMSG_ALT(msgMapID)
ALT_MSG_MAP
GE_WM_xxx
GE_WM_FORWARDMSG_ALT_CODE(code, msgMapID): like before, but only those messages whose capsule have been tagged by "code" while resending.
GE_WM_FORWARDMSG_ALT_CODE(code, msgMapID)
The GE_CHAIN_MSG_MAP series has been arranged to have a consistent number of macros: you can chain a default map or a particular "alternate map" (xxx_ALT(..., msgMapID): same concept than ATL).
GE_CHAIN_MSG_MAP
xxx_ALT(..., msgMapID)
And you can chain to:
Combining all this message routing techniques, it is now possible to do almost everything. And considering that every class can be a NWin::IMessageMap derived (no need to be itself a window wrapper), message maps can be a useful way to let classes to communicate without the need to entangle themselves (to be designed to know their reciprocal interface) in a static way.
NWin::IMessageMap
Where more dynamicity is required, the right solution - can be instead - to use "events" (NWrp::Event<A> and NWrp::EventRcv<D>: see part one for a description).
NWrp::Event<A>
NWrp::EventRcv<D>
Note the main difference between the two methods: message maps are macros providing fragments of code. Events are data structures. Message map chains are defined at compile time (when translating the macros). Event dispatching is a completely runtime mechanism.
It is - of course - technically possible to define communication between classes through message maps, and is also possible to convert Windows messages into events. I don't think - however - that the way I support events can be used as-is with messages: chaining message maps allow a number of messages to pass from one map to another. Events - by now - are one to one: a receiver must register individually all events it wants to receive.
Consider a window that can have a number of attached wrappers. You may be interested in seeking one of a given type (or inheriting from a given type). Because HWND wrappers are chained, this can be easily done using a template function and a dynamic_cast, by walking the chain until the cast is non-NULL. More general, bool WrpChainBase::DynamicFind<A>(A*&, TT::IH) does exactly that. But it is implemented in WrpChainBase, so it works on every chained wrapper.
bool WrpChainBase::DynamicFind<A>(A*&, TT::IH)
WrpChainBase
Of course, because we are basing on dynamic_cast, RTTI must be enabled.
Modal dialogs are an asymmetry in the Windows API: the DialoBox Windows function, requires a DLGPROC, but that "proc" is not a real true WNDPROC.
DialoBox
DLGPROC
WNDPROC
The real WNDPROC is private to the system. It calls your procedure and - if returning false - calls DefDlgProc. All this is inside a modal loop, internal to the DialogBox API.
DefDlgProc
DialogBox
To reconduct this in an already existent wrapper, I implemented a EWnd::CreateModalDialog, that - acting as CreateWnd - sets a hook and passes as a DLGPROC an internal hidden function. The hook attaches the creating new window (dialog, in this case) to the requesting wrapper and auto-detaches itself.
EWnd::CreateModalDialog
CreateWnd
The hook procedure has been reviewed to let the hook active for the shortest possible period (avoiding to recourse in the hook function where multiple creation of windows happen in a nested way - think to a parent that creates its children during its own creation process). The provided hidden DLGPROC always returns false, apart for WM_COMMANDs with code between 1 and 7 inclusive (IDOK, IDCANCEL, ..., IDCLOSE: just to have a default processing that returns. Or you'd risk to stuck your program into an "about box"!).
IDOK
IDCANCEL
IDCLOSE
The reviewed hook procedure also corrects a bug: in the previous version, if a still-creating window is wrapped trapping WM_CREATE to create more windows (think to a main window with children), a number of nested hooks are instantiated, but - when returning - only the last is unhooked. Although this has no consequence in functionality (hook procedures do nothing but attach the first wrapper), it may have consequences on performance in certain cases. Now, this cannot anymore happen: the hook "unhooking" is done inside the hook proc itself, before any message is dispatched to any wrapper. No recursions can arise.
WM_CREATE
It's not easy to demonstrate all this in a simple app that does more or less nothing but lets you check almost anything.
The W3 project uses both Nlib and NGUI in doing so. I started creating a frame, wrapping it with EMenuUpdator and EMauImagerIDE, and adding a child window during WM_CREATE.
EMauImagerIDE
I also routed commands to the child, and processed ID_FILE_EXIT in the main window and ID_HELP_ABOUT in the child (OK: that's unusual, but to demonstrate command routing, it's fine!).
ID_FILE_EXIT
ID_HELP_ABOUT
To respond to ID_HELP_ABOUT, I instantiate a modal DialogBox.
There is the need to link the ComCtrl32.lib inport library, and call InitCommonControlsEx. This is an annoying always needed stuff, so I placed all into a class (NUtil::SInitCommonControlsEx). Just instantiate a temporary object calling the constructor passing the required value (I assumed to default to ICC_WIN95_CLASSES), and that's it. The library is linked through a #pragma comment(lib ...) in the NGUI/CommCtrl.h header.
InitCommonControlsEx
NUtil::SInitCommonControlsEx
ICC_WIN95_CLASSES
#pragma comment(lib ...)
Note: I didn't make this initialization implicit (i.e., via a static instantiated object) because it is not necessarily true that every application needs common controls in the same way.
Just to do something with the "About" dialog, I wrapped it with CAboutBox, and I instantiated on creation a timer that does a countdown with a progress bar. When reaching zero, the dialog auto-dismisses. (By pressing OK, you anticipate the dismissing). This is just to demonstrate the messagemap correctly working with a DLGPROC.
CAboutBox
The idea is to let a frame manipulate a client window and a set of docked "bars", with an algorithm similar to the IDE.
DockMan.h and DockMan.cpp contain the required stuff.
In particular, two interfaces define the interactions between dockable objects and frames.
ILayoutManager defines the prototype for the RedoLayout function, while IAutoLayout prototypes the DoLayout and GetSideAlignment functions. Typically, an implementation of ILayoutManager should contain or refer a number of IAutolayout elements to arrange when moved or sized. RedoLayout is called passing the requesting HWND (in case it is to be skipped by the layout algorithm. Normally this parameter is NULL). It should call, in turn, the DoLayout of the contained IAutoLayout, passing a rectangle. The contained IAutoLayout should redesign itself basing on that rectangle end modifying it to the portion of the rectangle that remains uncovered.
ILayoutManager
RedoLayout
IAutoLayout
DoLayout
GetSideAlignment
IAutolayout
In synthesis, ILayoutManager is what defines how a layout should be done. IAutolayout are the components that are laid-out.
The provided implementation - to avoid to entangle the classes - splits the implementation of ILauoutManager into two parts.
ILauoutManager
An internal class (XLayoutProvider) implements on the heap an ILayoutManager, and can be gotten calling the static ILayoutManager::Get(HWND) function: it retrieves (via DynamicFind) an ILayoutManager associated to the passed HWND or (if none is attached) creates an XLayoutProvider and attaches it, after making it an auto-deletable observer.
XLayoutProvider
ILayoutManager::Get(HWND)
DynamicFind
HWND
XLayoutProvider processes the WM_SIZE message by getting the client rectangle and passing it into a typed notify message whose data structure is ILayoutManager::Autonotify.
XLayoutProvider
WM_SIZE
ILayoutManager::Autonotify
At this point, you can attach an arbitrary number of wrappers implementing a handle to this message that, getting the rectangle carried by the data structure and some owned data, decides what to do with any number of embedded or referred IAutoLayout elements.
In particular, EDockBarManager implements ILayoutManager::Autonotify by containing one EClientWnd and four (one per side) EDockBars. Each EDockBar can receive any number of HWND to embed, and - when doing so - attaches an EDockBar::XElement to the passed HWND. This other internal class, is deigned to work with EDockBar, and it is an observer auto-deleting EWnd. It lives on the heap and is destroyed when the window it is attached to is destroyed, and maintains the docking state (the placement), managing docking and undocking of the wrapped HWND. The docking capability, so, it is not to be designed within the passed window, but is "plugged in" when the window's attached.
EDockBarManager
EClientWnd
EDockBar
EDockBar::XElement
Those HWNDs don't need to be any of particular: just regular popup windows created as owned by the parent of the EDockBars (usually an EDockBarManager). They become children of the bars while docked and return to be popup when floating.
Of course, those windows can be controls or toolbars or ... other more complex windows (part 4 of the article will treat this theme).
About commands, both EDockBar and EdockBar::XElem forward received commands to the parent (or owner), while EDockBarManager forwards them to the client window. This creates a sort of MFC-like command routing.
EdockBar::XElem
In the sample app, I create a CMainWnd that - in turn - creates 8 differently positioned bars (see CMainWnd::OnCreate). Some are movable, some others are sizable.
CMainWnd
CMainWnd::OnCreate
Some bars have a regular "close button". If you "close" them, it will act as normally, destroying the bar (and since it is wrapped by an internal auto-deleting observing wrapper, the wrapper is also destroyed). I didn't provide any interface to manage bar creation or hiding, since it was not in the scope of these classes.
Note the NUtil::STrace::_Filter() = 2; in WinMain: it is to avoid the STrace class to display, in the debug output, lots of messages coming from the message dispatching and from the GDI Handle wrapping-unwrapping activities.
NUtil::STrace::_Filter() = 2
STrace
I'm actually working on a unified model for toolbar, statusbar, menubar etc., and for an IDE-like interface for inner windows. They'll be the subject for part. | http://www.codeproject.com/Articles/7190/Writing-Win32-Apps-with-C-only-classes-part-3?fid=52960&tid=839028 | CC-MAIN-2015-27 | refinedweb | 5,447 | 53.21 |
Strings are an essential data type in Python that are used in nearly every application. In this article we learn about the most essential built-in string methods.
With this resource you will be equipped with all the tips and knowledge you need to work with strings easily and you will be able to modify them without any problems.
1. Slicing
With slicing we can access substrings. It can get optional start and stop indices.
s = ' hello ' s = s[3:8] # no crash if s[3:20] # 'hello'
2. strip()
Return a copy of the string with the leading and trailing characters removed. The chars argument is a string specifying the set of characters to be removed. If omitted or None, the chars argument defaults to removing whitespace.
s = ' hello '.strip() # 'hello'
3./4. lstrip() and rstrip()
lstrip([chars]): Return a copy of the string with leading characters removed.
rtrip([chars]): Return a copy of the string with trailing characters removed.
s = ' hello '.lstrip() # 'hello ' s = ' hello '.rstrip() # ' hello'
strip() with character
We can specify character(s) instead of removing the default whitespace.
s = '###hello###'.strip('#') # 'hello'
Careful: only leading and trailing found matches are removed:
s = ' \n \t hello\n'.strip('\n') # -> not leading, so the first \n is not removed! # ' \n \t hello' s = '\n\n \t hello\n'.strip('\n') # ' \t hello'
strip() with combination of characters
The chars argument is a string specifying the set of characters to be removed. So all occurrences of these characters are removed, and not the particular given string.
s = ''.strip('cmow.') # 'example'
5./6. removeprefix() and removesuffix()
Like seen before, strip, lstrip, and rstrip remove all occurrences of the passed chars string. So if we just want to remove the given string, we can use remove prefix and removesuffix.
s = 'Arthur: three!'.lstrip('Arthur: ') # 'ee!' s = 'Arthur: three!'.removeprefix('Arthur: ') # 'three!' s = 'HelloPython'.removesuffix('Python') # 'Hello'
7. replace()
Return a copy of the string with all occurrences of substring old replaced by new.
s = ' \n \t hello\n'.replace('\n', '') # ' \t hello'
8. re.sub()
If we want to replace a specific pattern with another character, we can use the re module and use a regular expression.
import re s = "string methods in python" s2 = re.sub("\s+" , "-", s) # 'string-methods-in-python'
More about regular expression can be learned in this Crash Course.
9. split()
Return a list of the words in the string, using sep as the delimiter string. If maxsplit is given, at most maxsplit splits are done.
s = 'string methods in python'.split() # ['string', 'methods', 'in', 'python'] s = 'string methods in python'.split(' ', maxsplit=1) # ['string', 'methods in python']
10. rsplit()
Return a list of the words in the string, using sep as the delimiter string. If maxsplit is given, at most maxsplit splits are done, the rightmost ones.
s = 'string methods in python'.rsplit() # ['string', 'methods', 'in', 'python'] s = 'string methods in python'.rsplit(' ', maxsplit=1) # ['string methods in', 'python']
11. join()
Return a string which is the concatenation of the strings in iterable.
list_of_strings = ['string', 'methods', 'in', 'python'] s = ' '.join(list_of_strings) # 'string methods in python'
12./13./14. upper(), lower(), capitalize()
Return a copy of the string with all the cased characters converted to uppercase, lowercase, or first character capitalized and the rest lowercased.
s = 'python is awesome!'.upper() # 'PYTHON IS AWESOME!' s = 'PYTHON IS AWESOME!'.lower() # 'python is awesome!' s = 'python is awesome!'.capitalize() # 'Python is awesome!'
15./16. islower(), isupper()
Checks if the string consist only of upper or lower characters.
'PYTHON IS AWESOME!'.islower() # False 'python is awesome!'.islower() # True 'PYTHON IS AWESOME!'.isupper() # True 'PYTHON IS awesome!'.isupper() # False
17./18./19. isalpha(), isnumeric(), isalnum()
isalpha(): Return True if all characters in the string are alphabetic and there is at least one character, False otherwise.
isnumeric(): Return True if all characters in the string are numeric characters, and there is at least one character, False otherwise.
isalnum(): Return True if all characters in the string are alphanumeric and there is at least one character, False otherwise.
s = 'python' print(s.isalpha(), s.isnumeric(), s.isalnum() ) # True False True s = '123'print(s.isalpha(), s.isnumeric(), s.isalnum() ) # False True True s = 'python123' print(s.isalpha(), s.isnumeric(), s.isalnum() ) # False False True s = 'python-123' print(s.isalpha(), s.isnumeric(), s.isalnum() ) # False False False
20. count()
Return the number of non-overlapping occurrences of substring sub in the range [start, end].
n = 'hello world'.count('o') # 2
21. find()
Return the lowest index in the string where substring sub is found within the slice s[start:end].
s = 'Machine Learning' idx = s.find('a') print(idx) # 1 print(s[idx:]) # 'achine Learning' idx = s.find('a', 2) print(idx) # 10 print(s[idx:]) # 'arning'
22. rfind()
Return the highest index in the string where substring sub is found, such that sub is contained within s[start:end].
s = 'Machine Learning' idx = s.rfind('a') print(idx) # 10
23./24. startswith() and endswith()
Return True if string starts/ends with the prefix/suffix, otherwise return False.
s = 'Patrick'.startswith('Pat') # case sensitive! # True s = 'Patrick'.endswith('k') # case sensitive! # True
25. partition().
s = 'Python is awesome!' parts = s.partition('is') # ('Python ', 'is', ' awesome!') parts = s.partition('was') # ('Python is awesome!', '', '')
26./27./28 center(), ljust(), rjust()
center(): Return centered in a string of length width. Padding is done using the specified fillchar (default is a space).
ljust(): Return the string left justified in a string of length width. Padding is done using the specified fillchar (default is a space).
rjust(): Return the string right justified in a string of length width. Padding is done using the specified fillchar (default is a space).
s = 'Python is awesome!' s = s.center(30, '-') # ------Python is awesome!------ s = 'Python is awesome!' s = s.ljust(30, '-') # Python is awesome!------------ s = 'Python is awesome!' s = s.rjust(30, '-') # ------------Python is awesome!
29. f-Strings
Since Python 3.6, f-strings can be used to format strings. They are more readable, more concise, and also faster!
num = 1 language = 'Python' s = f'{language} is the number {num} in programming!' # 'Python is the number 1 in programming!'
30. swapcase()
Return a copy of the string with uppercase characters converted to lowercase and vice versa.
s = 'HELLO world' s = s.swapcase() # 'hello WORLD'
31. zfill()
Return a copy of the string left filled with ‚0‘ digits to make a string of length width. A leading sign prefix (‚+‘/'-') is handled by inserting the padding after the sign character rather than before.
s = '42'.zfill(5) # '00042' s = '-42'.zfill(5) # '-0042'
More on Strings
More information about Strings in Python can be found in this article: Strings - Advanced Python 05. | https://www.python-engineer.com/posts/string-methods-python/ | CC-MAIN-2022-21 | refinedweb | 1,120 | 70.29 |
External modules
There is a lot of power and usability packed into the TypeScript external module pattern. Here we discuss its power and some patterns needed to reflect real world usages.
Clarification: commonjs, amd, es modules, others
First up we need to clarify the (awful) inconsistency of the module systems out there. I'll just give you my current recommendation and remove the noise i.e. not show you all the other ways things can work.
From the same TypeScript you can generate different JavaScript depending upon the
module option. Here are things you can ignore (I am not interested in explaining dead tech):
- AMD: Do not use. Was browser only.
- SystemJS: Was a good experiment. Superseded by ES modules.
- ES Modules: Not ready yet.
Now these are just the options for generating the JavaScript. Instead of these options use
module:commonjs
How you write TypeScript modules is also a bit of a mess. Again here is how not to do it today:
import foo = require('foo'). i.e.
import/require. Use ES module syntax instead.
Cool, with that out of the way, lets look at the ES module syntax.
Summary: Use
module:commonjsand use the ES module syntax to import / export / author modules.
ES Module syntax
- Exporting a variable (or type) is as easy as prefixing the keyword
exporte.g.
// file `foo.ts` export let someVar = 123; export type SomeType = { foo: string; };
- Exporting a variable or type in a dedicated
exportstatement e.g.
// file `foo.ts` let someVar = 123; type SomeType = { foo: string; }; export { someVar, SomeType };
- Exporting a variable or type in a dedicated
exportstatement with renaming e.g.
// file `foo.ts` let someVar = 123; export { someVar as aDifferentName };
- Import a variable or a type using
importe.g.
// file `bar.ts` import { someVar, SomeType } from './foo';
- Import a variable or a type using
importwith renaming e.g.
// file `bar.ts` import { someVar as aDifferentName } from './foo';
Import everything from a module into a name with
import * ase.g.
// file `bar.ts` import * as foo from './foo'; // you can use `foo.someVar` and `foo.SomeType` and anything else that foo might export.
Import a file only for its side effect with a single import statement:
import 'core-js'; // a common polyfill library
- Re-Exporting all the items from another module
export * from './foo';
- Re-Exporting only some items from another module
export { someVar } from './foo';
- Re-Exporting only some items from another module with renaming
export { someVar as aDifferentName } from './foo';
Default exports/imports
As you will learn later, I am not a fan of default exports. Nevertheless here is syntax for export and using default exports
- Export using
export default
- before a variable (no
let / const / varneeded)
- before a function
- before a class
// some var export default someVar = 123; // OR Some function export default function someFunction() { } // OR Some class export default class SomeClass { }
- Import using the
import someName from "someModule"syntax (you can name the import whatever you want) e.g.
import someLocalNameForThisFile from "../foo";
Module paths
I am just going to assume
moduleResolution: commonjs. This is the option you should have in your TypeScript config. This setting is implied automatically by
module:commonjs.
There are two distinct kinds of modules. The distinction is driven by the path section of the import statment (e.g.
import foo from 'THIS IS THE PATH SECTION').
- Relative path modules (where path starts with
.e.g.
./someFileor
../../someFolder/someFileetc.)
- Other dynamic lookup modules (e.g.
'core-js'or
'typestyle'or
'react'or even
'react/core'etc.)
The main difference is how the module is resolved on the file system.
I will use a conceptual term place that I will explain after mentioning the lookup pattern.
Relative path modules
Easy, just follow the relative path :) e.g.
- if file
bar.tsdoes
import * as foo from './foo';then place
foomust exist in the same folder.
- if file
bar.tsdoes
import * as foo from '../foo';then place
foomust exist in a folder up.
- if file
bar.tsdoes
import * as foo from '../someFolder/foo';then one folder up, there must be a folder
someFolderwith a place
foo
Or any other relative path you can think of :)
Dynamic lookup
When the import path is not relative, lookup is driven by node style resolution. Here I only give a simple example:
You have
import * as foo from 'foo', the following are the places that are checked in order
./node_modules/foo
../node_modules/foo
../../node_modules/foo
- Till root of file system
You have
import * as foo from 'something/foo', the following are the places that are checked in order
./node_modules/something/foo
../node_modules/something/foo
../../node_modules/something/foo
- Till root of file system
What is place
When I say places that are checked I mean that the following things are checked in that place. e.g. for a place
foo:
- If the place is a file, e.g.
foo.ts, hurray!
- else if the place is a folder and there is a file
foo/index.ts, hurray!
- else if the place is a folder and there is a
foo/package.jsonand a file specified in the
typeskey in the package.json that exists, then hurray!
- else if the place is a folder and there is a
package.jsonand a file specified in the
mainkey in the package.json that exists, then hurray!
By file I actually mean
.ts /
.d.ts and
.js.
And that's it. You are now module lookup experts (not a small feat!).
Overturning dynamic lookup just for types
You can declare a module globally for your project by using
declare module 'somePath' and then imports will resolve magically to that path
e.g.
// globals.d.ts declare module 'foo' { // Some variable declarations export var bar: number; /*sample*/ }
and then:
// anyOtherTsFileInYourProject.ts import * as foo from 'foo'; // TypeScript assumes (without doing any lookup) that // foo is {bar:number}
import/require for importing type only
The following statement:
import foo = require('foo');
actually does two things:
- Imports the type information of the foo module.
- Specifies a runtime dependency on the foo module.
You can pick and choose so that only the type information is loaded and no runtime dependency occurs. Before continuing you might want to recap the declaration spaces section of the book.
If you do not use the imported name in the variable declaration space then the import is completely removed from the generated JavaScript. This is best explained with examples. Once you understand this we will present you with use cases.
Example 1
import foo = require('foo');
will generate the JavaScript:
That's right. An empty file as foo is not used.
Example 2
import foo = require('foo'); var bar: foo;
will generate the JavaScript:
var bar;
This is because
foo (or any of its properties e.g.
foo.bas) is never used as a variable.
Example 3
import foo = require('foo'); var bar = foo;
will generate the JavaScript (assuming commonjs):
var foo = require('foo'); var bar = foo;
This is because
foo is used as a variable.
Use case: Lazy loading
Type inference needs to be done upfront. This means that if you want to use some type from a file
foo in a file
bar you will have to do:
import foo = require('foo'); var bar: foo.SomeType;
However, you might want to only load the file
foo at runtime under certain conditions. For such cases you should use the
imported name only in type annotations and not as a variable. This removes any upfront runtime dependency code being injected by TypeScript. Then manually import the actual module using code that is specific to your module loader.
As an example, consider the following
commonjs based code where we only load a module
'foo' on a certain function call:
import foo = require('foo'); export function loadFoo() { // This is lazy loading `foo` and using the original module *only* as a type annotation var _foo: typeof foo = require('foo'); // Now use `_foo` as a variable instead of `foo`. }
A similar sample in
amd (using requirejs) would be:
import foo = require('foo'); export function loadFoo() { // This is lazy loading `foo` and using the original module *only* as a type annotation require(['foo'], (_foo: typeof foo) => { // Now use `_foo` as a variable instead of `foo`. }); }
This pattern is commonly used:
- in web apps where you load certain JavaScript on particular routes,
- in node applications where you only load certain modules if needed to speed up application bootup.
Use case: Breaking Circular dependencies
Similar to the lazy loading use case certain module loaders (commonjs/node and amd/requirejs) don't work well with circular dependencies. In such cases it is useful to have lazy loading code in one direction and loading the modules upfront in the other direction.
Use case: Ensure Import
Sometimes you want to load a file just for the side effect (e.g. the module might register itself with some library like CodeMirror addons etc.). However, if you just do a
import/require the transpiled JavaScript will not contain a dependency on the module and your module loader (e.g. webpack) might completely ignore the import. In such cases you can use a
ensureImport variable to ensure that the compiled JavaScript takes a dependency on the module e.g.:
import foo = require('./foo'); import bar = require('./bar'); import bas = require('./bas'); const ensureImport: any = foo || bar || bas; | https://basarat.gitbooks.io/typescript/content/docs/project/external-modules.html | CC-MAIN-2019-04 | refinedweb | 1,549 | 65.83 |
I know that JSON isn't built into ExtendScript but even with the downloaded lib I do not understand how you are supposed to use this?
If I try and use #include in my jsx-file the entire script comes to a halt (nothing gets loaded)
main.jsx
#include "../js/libs/json2.js" alert("this does not trigger! - script halted on line above");
So what's the approach here? Send the file path from JS to JSX, create a file object and read every line in JSX, returing the file string back to JS and then parse it?
I understand that you're referring to an HTML Panel context, am I correct?
If this is the case, yes, include will not work. You have to send the extension's path down to the JSX from the JS, then evaluate json2.js with $.evalFile().
From that point onwards, the JSON object should be available to your extendscript context.
Davide Barranca
If you want to parse JSON in extendscript, you can use a polyfill. Here's an easy example you can copy/paste: generator-gizmo/rootscript.jsx at master · codearoni/generator-gizmo · GitHub
Just paste that at the top of you extendscript file and you should be good to go.
I moved the parsing to the JS file instead but it appears that you can't even use JSON there! JSON is undefined -_-
Why did Adobe make things so bloody complicated?
EDIT
I put the JSON.parse() inside a try-catch and I'm getting syntax errors (which is weird as I've used several online validators on my JSON code)
Exception:SyntaxError: Unexpected token :
JSON
{ "lolk": [ { "icon": "/img/picA.png", "path": "/scripts/file.a", "text": "somethingA" }, { "icon": "/img/picB.png", "path": "/scripts/file.b", "text": "somethingB" }, { "icon": "/img/picC.png", "path": "/scripts/file.c", "text": "somethingC" } ] }
EDIT2
JSON.parse() seems to be completely broken?
try{ var lol = JSON.parse("{'lolk':'stuff'}"); // Exception:SyntaxError: Unexpected token ' alert(lol); } catch(e) { alert("Exception:" + e + "\nCould not run JSON.parse()"); }
Summary
-The JSON -library json2.js can be "included" in the main.js file by running evalFile(pathToFile). I assume this also works in JSX. Thanks goes to Davide Barranca for showing us the way!
-You cannot use #include in the JSX file for Photoshop as this will completely break the execution
-When reading the file, you have to do it all in one go - not line-by-line (see below).
I read the JSON-file incorrectly, which broke JSON.parse()
For some reason I had setup the function that did the reading, to read the file row-by-row, like this:
JSX
file = new File(filePath); file.open("r", "TEXT"); var fileString = ""; while (!file.eof){ var line = file.readln(); if (fileString.indexOf(line) == -1){ fileString += line; } } return fileString;
Which seems to add characters that the parser does not like.
I changed it so that it just reads the entire file
JSX
var scriptFile = File(filePath); scriptFile.open('r'); var content = scriptFile.read(); scriptFile.close(); | https://forums.adobe.com/thread/2210757 | CC-MAIN-2018-47 | refinedweb | 502 | 77.03 |
Writing useful test assertions with Node’s assert libraryJuly 2, 2017
During the last week I spent a few hours working on unit tests of JavaScript projects. Some of it happened in Babel where I made some contributions increasing overall test coverage. But the bigger part was for a new JavaScript project I started for myself where I take a testing-first approach. That means I dealt with a lot of failing tests and had to deal with their error messages. There is a number of ways of writing assertions, but I always try to keep it as simple as possible. I like to think that ideal tests should not only test code, but also provide insight into the API of modules and serve as living documentation. That is why I try to keep complexity and abstraction as low as possible in test files. But this week I looked into different ways of writing assertions and how introducing a bit of abstraction can make tests more helpful, especially failing ones. Below is a simplified example of how I improved my Mocha 1 test suite. In this case, the assertion tests whether the function returns an object of the correct class ( I understand that this can be done by a type system, however this is not the point of this post ).
import assert from 'assert' import someFunc from './someFunc' import Result from './Result' describe('The return value of someFunc', () => { // Assume this returns 17 const returnValue = someFunc() it('should be a Result instance', () => { // Useless error message: "AssertionError: false == true" assert(returnValue instanceof Result) // More helpful error message: "AssertionError: Expected Result instance." // But cumbersome to read and write assert(returnValue instanceof Result, 'Expected Result instance.') // Most helpful error message: "Expected 17 to be a Result instance." // Readable and reusable assertResult(returnValue) }) }) // Can be moved into a separate module if needed across different test files function assertResult(val) { assert(val instanceof Result, `Expected ${val} to be a Result instance.`) }
This is really just the power of functions but it felt like a major upgrade to my testing workflow. There is also a number of libraries that have generic version of these assertion functions that allow you to make more specific checks than Node’s
assert 2. These obviously help with better error messages, but going with custom wrappers around raw assertions is quite nice as well. For my own project I have written 109 tests so far and am happy with this approach. Maybe I will introduce another assertion library on top in the future, but for now this provides a nice testing workflow to me. | http://beta.maurobringolf.ch/2017/07/writing-useful-test-assertions-with-nodes-assert-library/ | CC-MAIN-2018-17 | refinedweb | 432 | 59.53 |
Due at 11:59pm on 4/15/2015.
Download lab. the
__next__ method:
__iter__ and
__next__. In other words, an object that implements
the iterable interface must implement an
__iter__ method that returns
an object that implements the
__next__ method.
Here is a table summarizing the required methods of the iterable and iterator interfaces/protocols. Python also has more documentation about iterator types.
This object that implements the
__next__ method is called an
iterator. While the iterator interface also requires that the object
implement the
__next__ and
__iter__ methods, it does not require
the
__iter__ method to return a new object — just itself (with state about
its current position).
One analogy: an iterable is a book (one can flip through the pages) and an iterator is a bookmark (saves the position and can then locate the next page). 6
This is somewhat equivalent to running:
t = AnIterator() t = t.__iter__() try: while True: print(t.__next__()) except StopIteration as e: pass
Try running each of the given iterators in a
for loop. Why does each
work or not work?
class IteratorA: def __init__(self): self.start = 5 def __next__(self): if self.start == 100: raise StopIteration self.start += 5
Watch out on this one. The amount of output might scare you. (Feel free
to type
Ctrl-C to stop.)
This is an infinite sequence! Sequences like these are the reason iterators are useful. Because iterators delay computation, we can use a finite amount of memory to represent an infinitely long sequence.
For one of the above iterators that works, try this:
>>> i = IteratorA() >>> for item in i: ... print(item)
Then again:
>>> for item in i: ... print(item)
With that in mind, try writing an iterator that "restarts" every time
it is run through a
for loop.
class IteratorRestart: """ >>> i = IteratorRestart(2, 7) >>> for item in i: ... print(item) 2 3 4 5 6 7 >>> for item in i: ... print(item) type of object is this?______<generator object ...>>>> iter(g)______<generator object ...>>>> next(g)______Starting here Before yield 0>>> next(g)______After yield Before yield 1 """"*** YOUR CODE HERE ***"while n >= 0: yield n n = n - 1
class Countdown: """ >>> for number in Countdown(5): ... print(number) ... 5 4 3 2 1 0 """"*** YOUR CODE HERE ***"def __init__(self, cur): self.cur = cur def __iter__(self): while self.cur > 0: yield self.cur self.cur -= 1
Write a generator that outputs the hailstone sequence from homework 1.
def hailstone(n): """ >>> for num in hailstone(10): ... print(num) ... 10 5 16 8 4 2 1 """"*** YOUR CODE HERE ***"i = n while i > 1: yield i if i % 2 == 0: i //= 2 else: i = i * 3 + 1 yield i
The following questions are for extra practice — they can be found in the lab11_extra.py file. It is recommended that you complete these problems as well, but you do not need to turn them in for credit.
A stream is another example of a lazy sequence. A stream is a lazily evaluated Linked List. In other words, the stream's elements (except for the first element) are only evaluated when the values are needed.
Take a look at the following code:
class Stream: class empty: def __repr__(self): return 'Stream.empty' empty = empty() def __init__(self, first, compute_rest=lambda: Stream))
We represent Streams using Python objects, similar to the way we defined Linked Lists.): """Returns a stream that is the sum of s1 and s2. >>> stream1 = make_integer_stream() >>> stream2 = make_integer_stream(9) >>> added = add_streams(stream1, stream2) >>> added.first 10 >>> added.rest.first 12 >>> added.rest.rest.first 14 """"*** YOUR CODE HERE ***"def compute_rest(): return add_streams(s1.rest, s2.rest) return Stream(s1.first + s2.first, compute_rest)(): """Return a stream containing the Fib sequence. >>> fib = make_fib_stream() >>> fib.first 0 >>> fib.rest.first 1 >>> fib.rest.rest.rest.rest.first 3 """"*** YOUR CODE HERE ***"return fib_stream_generator(0, 1) def fib_stream_generator(a, b): def compute_rest(): return fib_stream_generator(b, a+b) return Stream(a, compute_rest)
If the
add_streams function has been defined, we can wrte
make_fib_stream this way instead:
def make_fib_stream(): def compute_rest(): return add_streams(make_fib_stream(), make_fib_stream().rest) return Stream(0, lambda: Stream(1, compute_rest))!
What does the following Stream output? Try writing out the first few values of the stream to see the pattern.
def map_stream(fn, s): if s is Stream.empty: return s def compute_rest(): return map_stream(fn, s.rest) return Stream(fn(s.first), compute_rest) def my_stream(): def compute_rest(): return add_streams(map_stream(lambda x: 2 * x, my_stream()), my_stream()) return Stream(1, compute_rest)
Powers of 3: 1, 3, 9, 27, 81, ...
Define a function
interleave,.
def interleave(stream1, stream2): """Return a stream with alternating values from stream1 and stream2. >>> ints = make_integer_stream(1) >>> fib = make_fib_stream() >>> alternating = interleave(ints, fib) >>> alternating.first 1 >>> alternating.rest.first 0 >>> alternating.rest.rest.first 2 >>> alternating.rest.rest.rest.first 1 """"*** YOUR CODE HERE ***"if stream1 is Stream.empty: return Stream.empty return Stream(stream1.first, lambda: interleave(stream2, stream1.rest)) | http://gaotx.com/cs61a/lab/lab11/ | CC-MAIN-2018-43 | refinedweb | 820 | 60.82 |
As we just sаw, types must expose their metаdаtа to аllow tools аnd progrаms to аccess them аnd benefit from their services. Metаdаtа for types аlone is not enough. To simplify softwаre plug-аnd-plаy аnd configurаtion or instаllаtion of the component or softwаre, we аlso need metаdаtа аbout the component thаt hosts the types. Now we'll tаlk аbout .NET аssemblies (deployаble units) аnd mаnifests (the metаdаtа thаt describes the аssemblies).
During the COM erа, Microsoft documentаtion inconsistently used the term component to meаn а COM class or а COM module (DLLs or EXEs), forcing reаders or developers to consider the context of the term eаch time they encountered it. In .NET, Microsoft hаs аddressed this confusion by introducing а new concept, аssembly, which is а softwаre component thаt supports plug-аnd-plаy, much like а hаrdwаre component. Theoreticаlly, а .NET аssembly is аpproximаtely equivаlent to а compiled COM module. In prаctice, аn аssembly cаn contаin or refer to а number of types аnd physicаl files (including bitmаp files, .NET PE files, аnd so forth) thаt аre needed аt runtime for successful execution. In аddition to hosting IL code, аn аssembly is а bаsic unit of versioning, deployment, security mаnаgement, side-by-side execution, shаring, аnd reuse, аs we discuss next.
Type uniqueness is importаnt in RPC, COM, аnd .NET. Given the vаst number of GUIDs in COM (аpplicаtion, librаry, class, аnd interfаce identifiers), development аnd deployment cаn be tedious becаuse you must use these mаgic numbers in your code аnd elsewhere аll the time. In .NET, you refer to а specific type by its reаdаble nаme аnd its nаmespаce. Since а reаdаble nаme аnd its nаmespаce аre not enough to be globаlly unique, .NET guаrаntees uniqueness by using unique public/privаte key pаirs. All аssemblies thаt аre shаred (cаlled shаred аssemblies) by multiple аpplicаtions must be built with а public/privаte key pаir. Public/privаte key pаirs аre used in public-key cryptogrаphy. Since public-key cryptogrаphy uses аsymmetricаl encryption, аn аssembly creаtor cаn sign аn аssembly with а privаte key, аnd аnyone cаn verify thаt digitаl signаture using the аssembly creаtor's public key. However, becаuse no one else will hаve the privаte key, no other individuаl cаn creаte а similаrly signed аssembly.
To sign аn аssembly digitаlly, you must use а public/privаte key pаir to build your аssembly. At build time, the compiler generаtes а hаsh of the аssembly files, signs the hаsh with the privаte key, аnd stores the resulting digitаl signаture in а reserved section of the PE file. The public key is аlso stored in the аssembly.
To verify the аssembly's digitаl signаture, the CLR uses the аssembly's public key to decrypt the аssembly's digitаl signаture, resulting in the originаl, cаlculаted hаsh. In аddition, the CLR uses the informаtion in the аssembly's mаnifest to dynаmicаlly generаte а hаsh. This hаsh vаlue is then compаred with the originаl hаsh vаlue. These vаlues must mаtch, or we must аssume thаt someone hаs tаmpered with the аssembly.
Now thаt we know how to sign аnd verify аn аssembly in .NET, let's tаlk аbout how the CLR ensures thаt а given аpplicаtion loаds the trusted аssembly with which it wаs built. When you or someone else builds аn аpplicаtion thаt uses а shаred аssembly, the аpplicаtion's аssembly mаnifest will include аn 8-byte hаsh of the shаred аssembly's public key. When you run your аpplicаtion, the CLR dynаmicаlly derives the 8-byte hаsh from the shаred аssembly's public key аnd compаres this vаlue with the hаsh vаlue stored in your аpplicаtion's аssembly mаnifest. If these vаlues mаtch, the CLR аssumes thаt it hаs loаded the correct аssembly for you.[6]
[6] You cаn use the .NET Strong (а.k.а., Shаred) Nаme (sn.exe) utility to generаte а new key pаir for а shаred аssembly. Before you cаn shаre your аssembly, you must register it in the Globаl Assembly Cаche, or GACyou cаn do this by using the .NET Globаl Assembly Cаche Utility (gаcutil.exe). The GAC is simply а directory cаlled Assembly locаted under the Windows (%windir%) directory.
An аssembly contаins the IL code thаt the CLR executes аt runtime (see Section 2.5 lаter in this chаpter). The IL code typicаlly uses types defined within the sаme аssembly, but it аlso mаy use or refer to types in other аssemblies. Although nothing speciаl is required to tаke аdvаntаge of the former, the аssembly must define references to other аssemblies to do the lаtter, аs we will see in а moment. There is one cаveаt: eаch аssembly cаn hаve аt most one entry point, such аs DllMаin( ), WinMаin( ), or Mаin( ). You must follow this rule becаuse when the CLR loаds аn аssembly, it seаrches for one of these entry points to stаrt аssembly execution.
There аre four types of аssemblies in .NET:
These аre the .NET PE files thаt you creаte аt compile time. You cаn creаte stаtic аssemblies using your fаvorite compiler: csc, cl, vjc, or vbc.
These аre PE-formаtted, in-memory аssemblies thаt you dynаmicаlly creаte аt runtime using the classes in the System.Reflection.Emit nаmespаce.
These аre stаtic аssemblies used by а specific аpplicаtion.
These аre stаtic аssemblies thаt must hаve а unique shаred nаme аnd cаn be used by аny аpplicаtion.
An аpplicаtion uses а privаte аssembly by referring to the аssembly using а stаtic pаth or through аn XML-bаsed аpplicаtion configurаtion file. Although the CLR doesn't enforce versioning policieschecking whether the correct version is usedfor privаte аssemblies, it ensures thаt аn аpplicаtion uses the correct shаred аssemblies with which the аpplicаtion wаs built. Thus, аn аpplicаtion uses а specific shаred аssembly by referring to the specific shаred аssembly, аnd the CLR ensures thаt the correct version is loаded аt runtime.
In .NET, аn аssembly is the smаllest unit to which you cаn аssociаte а version number; it hаs the following formаt:
<mаjor_version>.<minor_version>.<build_number>.<revision>
Since а client аpplicаtion's аssembly mаnifest (to be discussed shortly) contаins informаtion on externаl referencesincluding the аssembly nаme аnd version the аpplicаtion usesyou no longer hаve to use the registry to store аctivаtion аnd mаrshаling hints аs in COM. Using the version аnd security informаtion recorded in your аpplicаtion's mаnifest, the CLR will loаd the correct shаred аssembly for you. The CLR does lаzy loаding of externаl аssemblies аnd will retrieve them on demаnd when you use their types. Becаuse of this, you cаn creаte downloаdаble аpplicаtions thаt аre smаll, with mаny smаll externаl аssemblies. When а pаrticulаr externаl аssembly is needed, the runtime downloаds it аutomаticаlly without involving registrаtion or computer restаrts.
The concept of а user identity is common in аll development аnd operаting plаtforms, but the concept of а code identity, in which even а piece of code hаs аn identity, is new to the commerciаl softwаre industry. In .NET, аn аssembly itself hаs а code identity, which includes informаtion such аs the аssembly's shаred nаme, version number, culture, public key, аnd where the code cаme from (locаl, intrаnet, or Internet). This informаtion is аlso referred to аs the аssembly's evidence, аnd it helps to identify аnd grаnt permissions to code, pаrticulаrly mobile code.
To coincide with the concept of а code identity, the CLR supports the concept of code аccess. Whether code cаn аccess resources or use other code is entirely dependent on security policy, which is а set of rules thаt аn аdministrаtor configures аnd the CLR enforces. The CLR inspects the аssembly's evidence аnd uses security policy to grаnt the tаrget аssembly а set of permissions to be exаmined during its execution. The CLR checks these permissions аnd determines whether the аssembly hаs аccess to resources or to other code. When you creаte аn аssembly, you cаn declаrаtively specify а set of permissions thаt the client аpplicаtion must hаve in order to use your аssembly. At runtime, if the client аpplicаtion hаs code аccess to your аssembly, it cаn mаke cаlls to your аssembly's objects; otherwise, а security exception will ensue. You cаn аlso imperаtively demаnd thаt аll code on the cаll stаck hаs the аppropriаte permissions to аccess а pаrticulаr resource.
We hаve sаid thаt аn аssembly is а unit of versioning аnd deployment, аnd we've tаlked briefly аbout DLL Hell, something thаt .NET intends to minimize. The CLR аllows аny versions of the sаme, shаred DLL (shаred аssembly) to execute аt the sаme time, on the sаme system, аnd even in the sаme process. This concept is known аs side-by-side execution. Microsoft .NET аccomplishes side-by-side execution by using the versioning аnd deployment feаtures thаt аre innаte to аll shаred аssemblies. This concept аllows you to instаll аny versions of the sаme, shаred аssembly on the sаme mаchine, without versioning conflicts or DLL Hell. The only cаveаt is thаt your аssemblies must be public or shаred аssemblies, meаning thаt you must register them аgаinst the GAC using а tool such аs the .NET Globаl Assembly Cаche Utility (gаcutil.exe). Once you hаve registered different versions of the sаme shаred аssembly into the GAC, the humаn-reаdаble nаme of the аssembly no longer mаtterswhаt's importаnt is the informаtion provided by .NET's versioning аnd deployment feаtures.
Recаll thаt when you build аn аpplicаtion thаt uses а pаrticulаr shаred аssembly, the shаred аssembly's version informаtion is аttаched to your аpplicаtion's mаnifest. In аddition, аn 8-byte hаsh of the shаred аssembly's public key is аlso аttаched to your аpplicаtion's mаnifest. Using these two pieces of informаtion, the CLR cаn find the exаct shаred аssembly thаt your аpplicаtion uses, аnd it will even verify thаt your 8-byte hаsh is indeed equivаlent to thаt of the shаred аssembly. Given thаt the CLR cаn identify аnd loаd the exаct аssembly, the end of DLL Hell is in sight.
When you wаnt to shаre your аssembly with the rest of the world, your аssembly must hаve а shаred or strong nаme, аnd you must register it in the GAC. Likewise, if you wаnt to use or extend а pаrticulаr class thаt is hosted by а pаrticulаr shаred аssembly, you don't just import thаt specific class, but you import the whole аssembly into your аpplicаtion. Therefore, the whole аssembly is а unit of shаring.
Assemblies turn out to be аn extremely importаnt feаture in .NET becаuse they аre аn essentiаl pаrt of the runtime. An аssembly encаpsulаtes аll types thаt аre defined within the аssembly. For exаmple, аlthough two different аssemblies, Personаl аnd Compаny, cаn define аnd expose the sаme type, Cаr, Cаr by itself hаs no meаning unless you quаlify it аs [Personаl]Cаr or [Compаny]Cаr. Given this, аll types аre scoped to their contаining аssembly, аnd for this reаson, the CLR cаnnot mаke use of а specific type unless the CLR knows the type's аssembly. In fаct, if you don't hаve аn аssembly mаnifest, which describes the аssembly, the CLR will not execute your progrаm.
An аssembly mаnifest is metаdаtа thаt describes everything аbout the аssembly, including its identity, а list of files belonging to the аssembly, references to externаl аssemblies, exported types, exported resources, аnd permission requests. In short, it describes аll the detаils thаt аre required for component plug-аnd-plаy. Since аn аssembly contаins аll these detаils, there's no need for storing this type of informаtion in the registry, аs in the COM world.
In COM, when you use а pаrticulаr COM class, you give the COM librаry а class identifier. The COM librаry looks up in the registry to find the COM component thаt exposes thаt class, loаds the component, tells the component to give it аn instаnce of thаt class, аnd returns а reference to this instаnce. In .NET, insteаd of looking into the registry, the CLR peers right into the аssembly mаnifest, determines which externаl аssembly is needed, loаds the exаct аssembly thаt's required by your аpplicаtion, аnd creаtes аn instаnce of the tаrget class.
Let's exаmine the mаnifest for the hello.exe аpplicаtion thаt we built eаrlier. Recаll thаt we used the ildаsm.exe tool to pick up this informаtion.
.аssembly extern mscorlib { .publickeytoken = (B7 7A 5C 56 19 34 EO 89 ) .ver 1:O:5OOO:O } .аssembly hello { .hаsh аlgorithm OxOOOO8OO4 .ver O:O:O:O } .module hello.exe // MVID: {F828835E-37O5-4238-BCD7-637ACDD33B78}
You'll notice thаt this mаnifest stаrts off identifying аn externаl or referenced аssembly, with mscorlib аs the аssembly nаme, which this pаrticulаr аpplicаtion references. The keywords .аssembly extern tell the CLR thаt this аpplicаtion doesn't implement mscorlib, but mаkes use of it insteаd. This externаl аssembly is one thаt аll .NET аpplicаtions will use, so you will see this externаl аssembly defined in the mаnifest of аll аssemblies. You'll notice thаt, inside this аssembly definition, the compiler hаs inserted а speciаl vаlue cаlled the publickeytoken, which is bаsic informаtion аbout the publisher of mscorlib. The compiler generаtes the vаlue for .publickeytoken by hаshing the public key аssociаted with the mscorlib аssembly. Another thing to note in the mscorlib block is the version number of mscorlib.[7]
[7] The fаscinаting detаils аre explаined in Pаrtition II Metаdаtа.doc аnd Pаrtition III CIL.doc, which come with the .NET SDK. If you reаlly wаnt to understаnd metаdаtа IL, reаd these documents.
Now thаt we've covered the first .аssembly block, let's exаmine the second, which describes this pаrticulаr аssembly. You cаn tell thаt this is а mаnifest block thаt describes our аpplicаtion's аssembly becаuse there's no extern keyword. The identity of this аssembly is mаde up of а reаdаble аssembly nаme, hello, its version informаtion, O:O:O:O, аnd аn optionаl culture, which is missing. Within this block, the first line indicаtes the hаsh аlgorithm thаt is used to hаsh selected contents of this аssembly, the result of which will be encrypted using the privаte key. However, since we аre not shаring this simple аssembly, there's no encryption аnd there's no .publickey vаlue.
The lаst thing to discuss is .module, which simply identifies the output filenаme of this аssembly, hello.exe. You'll notice thаt а module is аssociаted with а GUID, which meаns you get а different GUID eаch time you build the module. Given this, а rudimentаry test for exаct module equivаlence is to compаre the GUIDs of two modules.
Becаuse this exаmple is so simple, thаt's аll we get for our mаnifest. In а more complicаted аssembly, you cаn get аll this, including much more in-depth detаil аbout the mаke up of your аssembly.
An аssembly cаn be а single-module аssembly or а multi-module аssembly. In а single-module аssembly, everything in а build is clumped into one EXE or DLL, аn exаmple of which is the hello.exe аpplicаtion thаt we developed eаrlier. This is eаsy to creаte becаuse а compiler tаkes cаre of creаting the single-module аssembly for you.
If you wаnted to creаte а multi-module аssembly, one thаt contаins mаny modules аnd resource files, you hаve а few choices. One option is to use the Assembly Linker (аl.exe) thаt is provided by the .NET SDK. This tool tаkes one or more IL or resource files аnd spits out а file with аn аssembly mаnifest.
To use аn аssembly, first import the аssembly into your code, the syntаx of which is dependent upon the lаnguаge thаt you use. For exаmple, this is how we import аn аssembly in C#, аs we hаve seen previously in the chаpter:
using System;
When you build your аssembly, you must tell the compiler thаt you аre referencing аn externаl аssembly. Agаin, how you do this is different depending on the compiler thаt you use. If you use the C# compiler, here's how it's done:
csc /r:mscorlib.dll hello.cs
Eаrlier, we showed you how to compile hello.cs without the /r: option, but both techniques аre equivаlent. The reference to mscorlib.dll is inherently аssumed becаuse it contаins аll the bаse frаmework classes. | http://etutorials.org/Programming/.NET+Framework+Essentials/Chapter+2.+The+Common+Language+Runtime/2.4+Assemblies+and+Manifests/ | crawl-001 | refinedweb | 2,698 | 62.07 |
To use the features of a Level-2 S-function with Fortran code, you must write a skeleton S-function in C that has code for interfacing to the Simulink® software and also calls your Fortran code.
Using the C MEX S-function as a gateway is quite simple if you are writing the Fortran code from scratch. If instead you have legacy Fortran code that exists as a standalone simulation, there is some work to be done to identify parts of the code that need to be registered with the Simulink software, such as identifying continuous states if you are using variable-step solvers or getting rid of static variables if you want to have multiple copies of the S-function in a Simulink model (see Port Legacy Code).
The file
sfuntmpl_gate_fortran.c contains
a template for creating a C MEX-file S-function that invokes a Fortran
subroutine in its
mdlOutputs method. It works
with a simple Fortran subroutine if you modify the Fortran subroutine
name in the code. The template allocates DWork vectors to store the
data that communicates with the Fortran subroutine. See How to Use DWork Vectors for information
on setting up DWork vectors.
The following are some tips for creating the C-to-Fortran gateway S-function.
mex -setup needs to find the MATLAB®,
C, and the Fortran compilers, but it can work with only one of these
compilers at a time. If you change compilers, you must run
mex
-setup between other
mex commands.
Test the installation and setup using sample MEX-files from
the MATLAB, C, and Fortran MEX examples in the folder
(open),
as well as S-function examples.
matlabroot/extern/examples/mex
If using a C compiler on a Microsoft® Windows® platform,
test the
mex setup using the following commands
and the example C source code file,
yprime.c, in
.
matlabroot/extern/examples/mex
cd(fullfile(matlabroot,'\extern\examples\mex')) mex yprime.c
If using a Fortran compiler, test the
mex setup
using the following commands and the example Fortran source code files,
yprime.F and
yprimefg.F,
in
.
matlabroot/extern/examples/mex
cd(fullfile(matlabroot,'\extern\examples\mex')) mex yprimef.f yprimefg.f
For more information, see Build MEX File.
Your C and Fortran compilers need to use the same object format.
If you use the compilers explicitly supported
by the
mex command this is not a problem. When
you use the C gateway to Fortran, it is possible to use Fortran compilers
not supported by the
mex command, but only if the
object file format is compatible with the C compiler format. Common
object formats include ELF and COFF.
The compiler must also be configurable so that the caller cleans up the stack instead of the callee. Intel® Visual Fortran (the replacement for Compaq® Visual Fortran) has the default stack cleanup as the caller.
Symbol decorations can cause run-time errors. For example,
g77 decorates
subroutine names with a trailing underscore when in its default configuration.
You can either recognize this and adjust the C function prototype
or alter the Fortran compiler's name decoration policy via command-line
switches, if the compiler supports this. See the Fortran compiler
manual about altering symbol decoration policies.
If all else fails, use utilities such as
od (octal
dump) to display the symbol names. For example, the command
od -s 2 <file>
lists character vectors and symbols in binary (
.obj)
files.
These binary utilities can be obtained for the Windows platform
as well. The MKS, Inc. company provides commercial versions of powerful
utilities for The Open Group UNIX® platforms. Additional utilities
can also be obtained free on the Web.
hexdump is
another common program for viewing binary files. As an example, here
is the output of
od -s 2 sfun_atmos_for.o
on a Linux® platform.
0000115 E¨ 0000136 E¨ 0000271 E¨" 0000467 ˙E¨@ 0000530 ˙E¨ 0000575 E¨ E 5@ 0001267 CfƒVC- :C 0001323 :|.-:8˘#8 Kw6 0001353 ?333@ 0001364 333 0001414 01.01 0001425 GCC: (GNU) egcs-2.91.66 19990314/ 0001522 .symtab 0001532 .strtab 0001542 .shstrtab 0001554 .text 0001562 .rel.text 0001574 .data 0001602 .bss 0001607 .note 0001615 .comment 0003071 sfun_atmos_for.for 0003101 gcc2_compiled. 0003120 rearth.0 0003131 gmr.1 0003137 htab.2 0003146 ttab.3 0003155 ptab.4 0003164 gtab.5 0003173 atmos_ 0003207 exp 0003213 pow_d
Note that
Atmos has been changed to
atmos_,
which the C program must call to be successful.
With Visual Fortran on 32-bit Windows machines, the symbol
is suppressed, so that
Atmos becomes
ATMOS (no
underscore).
Fortran math library symbols might not match C math library
symbols. For example,
A^B in Fortran calls library
function
pow_dd, which is not in the C math library.
In these cases, you must tell
mex to link in the
Fortran math library. For
gcc environments, these
routines are usually found in
/usr/local/lib/libf2c.a,
/usr/lib/libf2c.a,
or equivalent.
The
mex command becomes
mex -L/usr/local/lib -lf2c cmex_c_file fortran_object_file
The
f2c package can be obtained for the Windows and UNIX environments
from the Internet. The file
libf2c.a is usually
part of
g77 distributions, or else the file is
not needed as the symbols match. In obscure cases, it must be installed
separately, but even this is not difficult once the need for it is
identified.
On 32-bit Windows machines, using Microsoft Visual C++® and Intel Visual Fortran 10.1, this example
can be compiled using the following two
mex commands.
Enter each command on one line. The
mex -setup C command
must be run to return to the C compiler before executing the second
command. In the second command, replace the variable
IFORT_COMPILER10 with
the name of the system's environment variable pointing to the Visual
Fortran 10.1 root folder on your system.
mex -v -c fullfile(matlabroot,'toolbox','simulink','simdemos','simfeatures', 'srcFortran','sfun_atmos_sub.F'), -f fullfile(matlabroot,'bin','win32', 'mexopts','intelf10msvs2005opts.bat')) !mex -v -L"%IFORT_COMPILER10%\IA32\LIB" -llibifcoremd -lifconsol -lifportmd -llibmmd -llibirc sfun_atmos.c sfun_atmos_sub.obj
On 64-bit Windows machines, using Visual C++ and
Visual Fortran 10.1, this example can be compiled using the following
two
mex commands (each command is on one line).
The
mex -setup C command must be run to return
to the C compiler before executing the second command. The variable
IFORT_COMPILER10 is
the name of the system's environment variable pointing to the Visual
Fortran 10.1 root folder and may vary on your system. Replace
matlabroot with
the path name to your MATLAB root folder.
mex -v -c fullfile(matlabroot,'toolbox','simulink','simdemos','simfeatures', 'srcFortran','sfun_atmos_sub.F'), -f fullfile(matlabroot,'bin','win64','mexopts', 'intelf10msvs2005opts.bat')) !mex -v -L"%IFORT_COMPILER10%\EM64T\LIB" -llibifcoremd -lifconsol -lifportmd -llibmmd -llibirc sfun_atmos.c sfun_atmos_sub.obj
Or you can try using CFortran to create an interface. CFortran is a tool for
automated interface generation between C and Fortran modules, in either
direction. Search the Web for
cfortran or visit
for downloading.
On a Windows machine, using Visual C++ with Fortran is best done with Visual Fortran 10.1.
For an up-to-date list of all the supported compilers, see the MathWorks supported and compatible compiler list at:
The
mdlInitializeSizes and
mdlInitializeSampleTimes methods
are coded in C. It is unlikely that you will need to call Fortran
routines from these S-function methods. In the simplest case, the
Fortran is called only from
mdlOutputs.
The Fortran code must at least be callable in one-step-at-a-time
fashion. If the code doesn't have any states, it can be called from
mdlOutputs and
no
mdlDerivatives or
mdlUpdate method
is required.
If the code has states, you must decide whether the Fortran
code can support a variable-step solver or not. For fixed-step solver
only support, the C gateway consists of a call to the Fortran code
from
mdlUpdate, and outputs are cached in an S-function
DWork vector so that subsequent calls by the Simulink engine
into
mdlOutputs will work properly and the Fortran
code won't be called until the next invocation of
mdlUpdate.
In this case, the states in the code can be stored however you like,
typically in the work vector or as discrete states.
If instead the code needs to have continuous time states with
support for variable-step solvers, the states must be registered and
stored with the engine as doubles. You do this in
mdlInitializeSizes (registering
states), then the states are retrieved and sent to the Fortran code
whenever you need to execute it. In addition, the main body of code
has to be separable into a call form that can be used by
mdlDerivatives to
get derivatives for the state integration and also by the
mdlOutputs and
mdlUpdate methods
as appropriate.
If there is a lengthy setup calculation, it is best to make
this part of the code separable from the one-step-at-a-time code and
call it from
mdlStart. This can either be a separate
SUBROUTINE called
from
mdlStart that communicates with the rest of
the code through
COMMON blocks or argument I/O,
or it can be part of the same piece of Fortran code that is isolated
by an
IF-THEN-ELSE construct. This construct can
be triggered by one of the input arguments that tells the code if
it is to perform either the setup calculations or the one-step calculations.
To be able to call Fortran from the Simulink software directly
without having to launch processes, etc., you must convert a Fortran
PROGRAM into
a
SUBROUTINE. This consists of three steps. The
first is trivial; the second and third can take a bit of examination.
Change the line
PROGRAM to
SUBROUTINE
subName.
Now you can call it from C using C function syntax.
Identify variables that need to be inputs
and outputs and put them in the
SUBROUTINE argument
list or in a
COMMON block.
It is customary to strip out all hard-coded cases and output dumps. In the Simulink environment, you want to convert inputs and outputs into block I/O.
If you are converting a standalone simulation to work inside the Simulink environment, identify the main loop of time integration and remove the loop and, if you want the Simulink engine to integrate continuous states, remove any time integration code. Leave time integrations in the code if you intend to make a discrete time (sampled) S-function.
Most Fortran compilers generate
SUBROUTINE code
that passes arguments by reference. This means that the C code calling
the Fortran code must use only pointers in the argument list.
PROGRAM ...
becomes
SUBROUTINE somename( U, X, Y )
A
SUBROUTINE never has a return value. You
manage I/O by using some of the arguments for input, the rest for
output.
A
FUNCTION has a scalar return value passed
by value, so a calling C program should expect this. The argument
list is passed by reference (i.e., pointers) as in the
SUBROUTINE.
If the result of a calculation is an array, then you should
use a subroutine, as a
FUNCTION cannot return an
array.
While there are several ways for Fortran
COMMON blocks
to be visible to C code, it is often recommended to use an input/output
argument list to a
SUBROUTINE or
FUNCTION.
If the Fortran code has already been written and uses
COMMON blocks,
it is a simple matter to write a small
SUBROUTINE that
has an input/output argument list and copies data into and out of
the
COMMON block.
The procedure for copying in and out of the
COMMON block
begins with a write of the inputs to the
COMMON block
before calling the existing
SUBROUTINE. The
SUBROUTINE is
called, then the output values are read out of the
COMMON block
and copied into the output variables just before returning.
The S-function example
sfcndemo_atmos contains
an example of a C MEX S-function calling a Fortran subroutine. The
Fortran subroutine
Atmos is in the file
sfun_atmos_sub.F.
This subroutine calculates the standard atmosphere up to 86 kilometers.
The subroutine has four arguments.
SUBROUTINE Atmos(alt, sigma, delta, theta)
The gateway C MEX S-function,
sfun_atmos.c,
declares the Fortran subroutine.
/* * Windows uses upper case for Fortran external symbols */ #ifdef _WIN32 #define atmos_ ATMOS #endif extern void atmos_(float *alt, float *sigma, float *delta, float *theta);
The
mdlOutputs method calls the Fortran
subroutine using pass-by-reference for the arguments.
/* call the Fortran routine using pass-by-reference */ atmos_(&falt, &fsigma, &fdelta, &ftheta);
To see this example working in the sample model
sfcndemo_atmos,
enter the following command at the MATLAB command prompt.
sfcndemo_atmos
On 64-bit Windows systems using Intel C++ 12.0 and Intel Visual Fortran 12, you need to use separate commands to compile the Fortran file and then link it to the C gateway file. Each command is on one line.
Run
cd(matlabroot) to go to your
MATLAB root.
Run
mex -setup Fortran to select
a Fortran compiler.
Compile the Fortran file using the following command. Enter the command on one line.
mex -v -c toolbox/simulink/simdemos/simfeatures/srcFortran/sfun_atmos_sub.F ... -f bin/win64/mexopts/intelf12msvs2008opts.bat
Run
mex -setup C to select a C
compiler.
Link the compiled Fortran subroutine to the gateway
C MEX S-function using the following command. The variable
IFORT_COMPILER12 is
the name of the system's environment variable pointing to the Visual
Fortran 12 root folder and may vary on your system.
!mex -v -L"%IFORT_COMPILER12%\IA64\LIB" -llibifcoremd -lifconsol -lifportmd ... -llibmmd -llibirc toolbox\simulink\simdemos\simfeatures\srcFortran\sfun_atmos.c sfun_atmos_sub.obj mex -v -c toolbox/simulink/simdemos/simfeatures/srcFortran/sfun_atmos_sub.F -f bin/win64/mexopts/intelf12msvs2008opts.bat
Build the gateway on a UNIX system using the command
mex sfun_atmos.c sfun_atmos_sub.o
On some UNIX systems where the C and Fortran compilers
were installed separately (or are not aware of each other), you might
need to reference the library
libf2c.a. To do this,
use the
-lf2c flag.
If the
libf2c.a library is not on the library
path, you need to add the path to the
mex process
explicitly with the
-L command. For example:
mex -L/usr/local/lib/ -lf2c sfun_atmos.c sfun_atmos_sub.o | http://www.mathworks.com/help/simulink/sfg/creating-level-2-fortran-s-functions.html?requestedDomain=www.mathworks.com&nocookie=true | CC-MAIN-2016-44 | refinedweb | 2,365 | 56.55 |
Introduction
The objective of this post is to explain how to serve a simple HTML page from the ESP32, using the Arduino core. You can check in this previous post how to set up the libraries needed for us to create an HTTP server.
Our web page will be a simple HTML form that we will serve from the asynchronous HTTP server. Please note that we will not create any CSS for styling our form, so it will be a pretty raw one, just to illustrate how to serve the HTML.
The tests of this ESP32 tutorial were performed using a DFRobot ESP-WROOM-32 device integrated in an ESP32 FireBeetle board.
The HTML code
We will start by designing the HTML code independently from the Arduino code. Then, we will use some tools to convert it to a compact string format we can use on the ESP32.
Explaining the HTML code in detail and how everything works is outside the scope of this post. Basically, we are going to create a very simple form with two inputs and a submit button, as can be seen below.
<form onSubmit = "event.preventDefault()">
    <label class="label">Network Name</label>
    <input type = "text" name = "ssid"/>
    <br/>
    <label>Password</label>
    <input type = "text" name = "pass"/>
    <br/>
    <input type="submit" value="Submit">
</form>
We are labeling the inputs as “Network Name” and “Password”, so this could be a possible implementation of a form to input a network’s WiFi credentials.
Please note that in this tutorial we are only going to cover the part of displaying the HTML content, so we are not going to develop the endpoint to receive the data from the form. Thus, we do the event.preventDefault() call on the submit action so nothing actually happens when we click the button.
Since we are developing for a microcontroller, which is a resource-constrained device, we can compress the previous HTML into a single-line format, thus getting rid of all the unnecessary tabs and newlines.
Although this will make the code hard for a person to read, it makes no difference to the client that will interpret it, and it allows us to save some space on the ESP32.
To perform this operation, we can use this online tool, which will minify our HTML code. To prevent problems, when working with complex HTML pages, my recommendation is that you always try the code locally after compressing it, to confirm no problem has occurred in the process.
To do so, you can simply paste the minified code in a text file, save it with a .html extension and open it with a web browser. If everything is ok, then you can use it on the ESP32.
You can check below at figure 1 the result of minifying the code with the online tool mentioned.
Figure 1 – Minification of the HTML code.
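For reference, the minified version of our form should look similar to the following single line:

<form onSubmit="event.preventDefault()"><label class="label">Network Name</label><input type="text" name="ssid"/><br/><label>Password</label><input type="text" name="pass"/><br/><input type="submit" value="Submit"></form>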
Nonetheless, we cannot directly use this code as a string in Arduino because of the quotes it contains. Thus, we will need to escape them.
Since HTML code typically has lots of quotes, the easiest way to get the escaped version is by using a tool such as this online one.
So, to escape the code, simply copy the previously minified version of the HTML and paste it on the tool, unchecking the “split output into multiple lines” option before starting the operation. You can check the expected result in figure 2.
Figure 2 – Escaping the minified HTML code in an online tool.
Now that we have the HTML code as a single line escaped C string, we can move on to the Arduino code.
The code
The code for this tutorial is similar to this previous one, with the exception that we are now going to serve more complex HTML content, which we will store in FLASH memory.
Since the HTML content is static, placing it in FLASH memory avoids consuming RAM, leaving this resource available for other uses. This is especially useful when our program serves a lot of static pages, which would consume a lot of RAM if not placed in FLASH.
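As a minimal sketch of the difference (the String example below is just an illustrative counterpart, not part of this tutorial's code):

// On the ESP32, constant global data like this lives in FLASH (rodata),
// so it does not occupy RAM at runtime
const char page[] = "<p>Hello</p>";

// By contrast, building the same content in a String object would
// allocate a copy of it in RAM
String pageInRam = "<p>Hello</p>";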
To start the code, we will write all the needed library includes. We will also declare as global variables the credentials needed for connecting to the WiFi network.
#include <WiFi.h>
#include <FS.h>
#include <AsyncTCP.h>
#include <ESPAsyncWebServer.h>

const char* ssid = "yourNetworkSSID";
const char* password = "yourNetworkPass";
Next we will declare an object of class AsyncWebServer, which we will use to set up the server. Note that as input of the constructor we need to pass the port where the server will be listening for requests.
AsyncWebServer server(80);
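Naturally, any other free port could be used instead; for example, a hypothetical server listening on port 8080 would be declared as shown below.

AsyncWebServer server(8080);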
To finalize the global declarations, we will create a variable in the PROGMEM that will contain the HTML code we have prepared.
Update: As explained in this blog post, on the ESP32 constant data is automatically stored in FLASH memory and can be accessed directly from FLASH memory without previously copying it to RAM. So, there is no need to use the PROGMEM keyword below and it is only defined due to compatibility with other platforms. You can check the PROGMEM define on this file. This forum post also contains more information about this subject. Thanks to MOAM Industries and Russell P Hintze for pointing this out. The PROGMEM keyword was removed from the final code below, but you can test with it to confirm that it is indeed defined and doesn't cause any compilation problem.
const char HTML[] PROGMEM = "<form onSubmit=\"event.preventDefault()\"><label class=\"label\">Network Name</label><input type=\"text\" name=\"ssid\"/><br/><label>Password</label><input type=\"text\" name=\"pass\"/><br/><input type=\"submit\" value=\"Submit\"></form>";
Moving on to the setup function, we will start by opening a serial connection and then connect the ESP32 to the WiFi network. You can check in more detail here how to connect the ESP32 to a WiFi network.
Serial.begin(115200);

WiFi.begin(ssid, password);

while (WiFi.status() != WL_CONNECTED) {
    delay(1000);
    Serial.println("Connecting to WiFi..");
}

Serial.println(WiFi.localIP());
After the connection to the WiFi network, we will take care of the webserver setup. So, we will need to bind a server route to a handling function that will return the HTML we defined when a client makes a request. We will use the “/html” route.
If you need a more detailed explanation of the function signatures and how the binding works, please consult this previous post.
server.on("/html", HTTP_GET, [](AsyncWebServerRequest *request){ request->send(200, "text/html", HTML); });
Note that we are specifying the return content-type as “text/html” so the client (in our case it will be a web browser) knows that it must interpret the received content as such.
Also, take into consideration that the last parameter of the send function is the content that we are actually returning to the client and, in our case, it is the HTML variable we defined earlier.
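As a quick illustration of why the content-type matters, we could register an additional hypothetical route (not part of this tutorial's final code) that returns the very same string declared as plain text; a browser requesting it would then display the raw markup instead of rendering the form.

server.on("/raw", HTTP_GET, [](AsyncWebServerRequest *request){
    // Same payload, but declared as plain text: the browser shows the
    // HTML source instead of rendering it
    request->send(200, "text/plain", HTML);
});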
Now that we have handled the configurations of the server, we simply need to call the begin function on the server object, so it starts listening to incoming HTTP requests. Since the server works asynchronously, we don’t need to perform any call on the main loop, which may be left empty.
You can check the full source code below.
#include <WiFi.h>
#include <FS.h>
#include <AsyncTCP.h>
#include <ESPAsyncWebServer.h>

const char* ssid = "yourNetworkSSID";
const char* password = "yourNetworkPass";

AsyncWebServer server(80);

const char HTML[] = "<form onSubmit=\"event.preventDefault()\"><label class=\"label\">Network Name</label><input type=\"text\" name=\"ssid\"/><br/><label>Password</label><input type=\"text\" name=\"pass\"/><br/><input type=\"submit\" value=\"Submit\"></form>";

void setup(){

    Serial.begin(115200);

    WiFi.begin(ssid, password);

    while (WiFi.status() != WL_CONNECTED) {
        delay(1000);
        Serial.println("Connecting to WiFi..");
    }

    Serial.println(WiFi.localIP());

    server.on("/html", HTTP_GET, [](AsyncWebServerRequest *request){
        request->send(200, "text/html", HTML);
    });

    server.begin();
}

void loop(){
}
Testing the code
To test the code, you simply need to compile it and upload it to your device using the Arduino IDE. Once it is ready, just open the serial monitor and wait for the connection to the WiFi network.
When it finishes, the local IP of the ESP32 on your wireless network should be printed to the serial monitor. Copy that IP, since we will need it to connect to the server.
Then, open a web browser of your choice and type the following URL in the address bar, changing {yourEspIp} to the IP you have just obtained.

http://{yourEspIp}/html
You should get an output similar to figure 3, which shows the HTML page being served and rendered by the browser.
Figure 3 – HTML form rendered in the browser.
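If you prefer testing outside the browser, the same page can also be fetched from the command line with a tool such as curl, again replacing {yourEspIp} with the IP printed to the serial monitor; the raw HTML of the form should be printed to the terminal.

curl http://{yourEspIp}/html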