get_paginate_by(queryset) Returns the number of items to paginate by, or None for no pagination. By default this returns the value of paginate_by.
django.ref.class-based-views.mixins-multiple-object#django.views.generic.list.MultipleObjectMixin.get_paginate_by
get_paginate_orphans() An integer specifying the number of “overflow” objects the last page can contain. By default this returns the value of paginate_orphans.
django.ref.class-based-views.mixins-multiple-object#django.views.generic.list.MultipleObjectMixin.get_paginate_orphans
get_paginator(queryset, per_page, orphans=0, allow_empty_first_page=True) Returns an instance of the paginator to use for this view. By default, instantiates an instance of paginator_class.
django.ref.class-based-views.mixins-multiple-object#django.views.generic.list.MultipleObjectMixin.get_paginator
get_queryset() Get the list of items for this view. This must be an iterable and may be a queryset (in which queryset-specific behavior will be enabled).
django.ref.class-based-views.mixins-multiple-object#django.views.generic.list.MultipleObjectMixin.get_queryset
model The model that this view will display data for. Specifying model = Foo is effectively the same as specifying queryset = Foo.objects.all(), where objects stands for Foo’s default manager.
django.ref.class-based-views.mixins-multiple-object#django.views.generic.list.MultipleObjectMixin.model
ordering A string or list of strings specifying the ordering to apply to the queryset. Valid values are the same as those for order_by().
django.ref.class-based-views.mixins-multiple-object#django.views.generic.list.MultipleObjectMixin.ordering
page_kwarg A string specifying the name to use for the page parameter. The view will expect this parameter to be available either as a query string parameter (via request.GET) or as a kwarg variable specified in the URLconf. Defaults to page.
django.ref.class-based-views.mixins-multiple-object#django.views.generic.list.MultipleObjectMixin.page_kwarg
paginate_by An integer specifying how many objects should be displayed per page. If this is given, the view will paginate objects with paginate_by objects per page. The view will expect either a page query string parameter (via request.GET) or a page variable specified in the URLconf.
django.ref.class-based-views.mixins-multiple-object#django.views.generic.list.MultipleObjectMixin.paginate_by
paginate_orphans An integer specifying the number of “overflow” objects the last page can contain. This extends the paginate_by limit on the last page by up to paginate_orphans, in order to keep the last page from having a very small number of objects.
django.ref.class-based-views.mixins-multiple-object#django.views.generic.list.MultipleObjectMixin.paginate_orphans
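The interaction between paginate_by and paginate_orphans can be sketched with the page-count rule the text describes. This is a plain-Python sketch of the documented behavior, not Django's actual Paginator code:

```python
from math import ceil

def num_pages(count, per_page, orphans=0):
    """Sketch: the last page may absorb up to `orphans` extra objects,
    so orphans are subtracted before computing the page count."""
    if count == 0:
        return 1  # assuming allow_empty_first_page=True
    hits = max(1, count - orphans)
    return ceil(hits / per_page)

# With paginate_by=10 and paginate_orphans=3, 23 objects fit on
# 2 pages (the last page holds 13), while 24 objects need 3 pages.
print(num_pages(23, 10, orphans=3))  # 2
print(num_pages(24, 10, orphans=3))  # 3
```

The point of the rule: a trailing page with only a couple of objects is folded into the previous page instead of being rendered on its own.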
paginate_queryset(queryset, page_size) Returns a 4-tuple containing (paginator, page, object_list, is_paginated). Constructed by paginating queryset into pages of size page_size. If the request contains a page argument, either as a captured URL argument or as a GET argument, object_list will correspond to the objects from that page.
django.ref.class-based-views.mixins-multiple-object#django.views.generic.list.MultipleObjectMixin.paginate_queryset
paginator_class The paginator class to be used for pagination. By default, django.core.paginator.Paginator is used. If the custom paginator class doesn’t have the same constructor interface as django.core.paginator.Paginator, you will also need to provide an implementation for get_paginator().
django.ref.class-based-views.mixins-multiple-object#django.views.generic.list.MultipleObjectMixin.paginator_class
queryset A QuerySet that represents the objects. If provided, the value of queryset supersedes the value provided for model. Warning: queryset is a class attribute with a mutable value, so care must be taken when using it directly. Before using it, either call its all() method or retrieve it with get_queryset(), which takes care of the cloning behind the scenes.
django.ref.class-based-views.mixins-multiple-object#django.views.generic.list.MultipleObjectMixin.queryset
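The warning above is an instance of the usual Python mutable-class-attribute pitfall. A minimal non-Django sketch of why an un-cloned class-level attribute leaks state between instances:

```python
class CachedView:
    """Plain-Python analogy (not Django code): a mutable class attribute
    is shared by every instance, like an uncloned class-level queryset
    whose result cache would persist across requests."""
    items = []  # shared mutable state

view_a = CachedView()
view_b = CachedView()
view_a.items.append("stale result")
print(view_b.items)  # ['stale result'] -- leaked between "views"
```

Calling all() (or get_queryset()) returns a fresh clone per request, avoiding exactly this kind of shared-state leak.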
class django.views.generic.list.MultipleObjectTemplateResponseMixin

A mixin class that performs template-based response rendering for views that operate upon a list of object instances. Requires that the view it is mixed with provides self.object_list, the list of object instances that the view is operating on. self.object_list may be, but is not required to be, a QuerySet. Extends TemplateResponseMixin.

Methods and Attributes

template_name_suffix
    The suffix to append to the auto-generated candidate template name. Default suffix is _list.

get_template_names()
    Returns a list of candidate template names, in this order:
    the value of template_name on the view (if provided)
    <app_label>/<model_name><template_name_suffix>.html
django.ref.class-based-views.mixins-multiple-object#django.views.generic.list.MultipleObjectTemplateResponseMixin
get_template_names() Returns a list of candidate template names. Returns the following list: the value of template_name on the view (if provided) <app_label>/<model_name><template_name_suffix>.html
django.ref.class-based-views.mixins-multiple-object#django.views.generic.list.MultipleObjectTemplateResponseMixin.get_template_names
template_name_suffix The suffix to append to the auto-generated candidate template name. Default suffix is _list.
django.ref.class-based-views.mixins-multiple-object#django.views.generic.list.MultipleObjectTemplateResponseMixin.template_name_suffix
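The candidate list described above can be sketched as a small helper. Note that candidate_templates is a hypothetical name for illustration, not part of Django:

```python
def candidate_templates(app_label, model_name, template_name=None, suffix="_list"):
    """Sketch of the documented candidate order: an explicit template_name
    first (if given), then the auto-generated <app>/<model><suffix>.html."""
    names = []
    if template_name:
        names.append(template_name)
    names.append(f"{app_label}/{model_name}{suffix}.html")
    return names

print(candidate_templates("blog", "article"))
# ['blog/article_list.html']
```

So a ListView over blog.Article with no template_name would look for blog/article_list.html.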
class JavaScriptCatalog

A view that produces a JavaScript code library with functions that mimic the gettext interface, plus an array of translation strings.

Attributes

domain
    Translation domain containing strings to add in the view output. Defaults to 'djangojs'.

packages
    A list of application names among installed applications. Those apps should contain a locale directory. All those catalogs plus all catalogs found in LOCALE_PATHS (which are always included) are merged into one catalog. Defaults to None, which means that all available translations from all INSTALLED_APPS are provided in the JavaScript output.

Example with default values:

    from django.views.i18n import JavaScriptCatalog

    urlpatterns = [
        path('jsi18n/', JavaScriptCatalog.as_view(), name='javascript-catalog'),
    ]

Example with custom packages:

    urlpatterns = [
        path('jsi18n/myapp/', JavaScriptCatalog.as_view(packages=['your.app.label']), name='javascript-catalog'),
    ]

If your root URLconf uses i18n_patterns(), JavaScriptCatalog must also be wrapped by i18n_patterns() for the catalog to be correctly generated. Example with i18n_patterns():

    from django.conf.urls.i18n import i18n_patterns

    urlpatterns = i18n_patterns(
        path('jsi18n/', JavaScriptCatalog.as_view(), name='javascript-catalog'),
    )
django.topics.i18n.translation#django.views.i18n.JavaScriptCatalog
domain Translation domain containing strings to add in the view output. Defaults to 'djangojs'.
django.topics.i18n.translation#django.views.i18n.JavaScriptCatalog.domain
packages A list of application names among installed applications. Those apps should contain a locale directory. All those catalogs plus all catalogs found in LOCALE_PATHS (which are always included) are merged into one catalog. Defaults to None, which means that all available translations from all INSTALLED_APPS are provided in the JavaScript output.
django.topics.i18n.translation#django.views.i18n.JavaScriptCatalog.packages
class JSONCatalog

In order to use another client-side library to handle translations, you may want to take advantage of the JSONCatalog view. It's similar to JavaScriptCatalog but returns a JSON response. See the documentation for JavaScriptCatalog to learn about possible values and use of the domain and packages attributes. The response format is as follows:

    {
        "catalog": {
            # Translations catalog
        },
        "formats": {
            # Language formats for date, time, etc.
        },
        "plural": "..."  # Expression for plural forms, or null.
    }
django.topics.i18n.translation#django.views.i18n.JSONCatalog
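A sketch of consuming that response shape with the standard json module. The sample values below are made up for illustration; a real response comes from the JSONCatalog endpoint:

```python
import json

# Hypothetical sample matching the documented response shape
# (real JSON has no comments, so the placeholder keys hold concrete values).
payload = json.loads("""
{
  "catalog": {"Hello": "Bonjour"},
  "formats": {"DATE_FORMAT": "j F Y"},
  "plural": "(n != 1)"
}
""")

print(payload["catalog"]["Hello"])   # Bonjour
print(payload["plural"])             # (n != 1)
```

A client-side library would look translations up in "catalog" and evaluate the "plural" expression to pick a plural form.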
set_language(request)
django.topics.i18n.translation#django.views.i18n.set_language
static.serve(request, path, document_root, show_indexes=False)
django.ref.views#django.views.static.serve
Widgets

A widget is Django's representation of an HTML input element. The widget handles the rendering of the HTML, and the extraction of data from a GET/POST dictionary that corresponds to the widget. The HTML generated by the built-in widgets uses HTML5 syntax, targeting <!DOCTYPE html>. For example, it uses boolean attributes such as checked rather than the XHTML style of checked='checked'.

Tip: Widgets should not be confused with the form fields. Form fields deal with the logic of input validation and are used directly in templates. Widgets deal with rendering of HTML form input elements on the web page and extraction of raw submitted data. However, widgets do need to be assigned to form fields.

Specifying widgets

Whenever you specify a field on a form, Django will use a default widget that is appropriate to the type of data that is to be displayed. To find which widget is used on which field, see the documentation about Built-in Field classes. However, if you want to use a different widget for a field, you can use the widget argument on the field definition. For example:

    from django import forms

    class CommentForm(forms.Form):
        name = forms.CharField()
        url = forms.URLField()
        comment = forms.CharField(widget=forms.Textarea)

This would specify a form with a comment that uses a larger Textarea widget, rather than the default TextInput widget.

Setting arguments for widgets

Many widgets have optional extra arguments; they can be set when defining the widget on the field.
In the following example, the years attribute is set for a SelectDateWidget:

    from django import forms

    BIRTH_YEAR_CHOICES = ['1980', '1981', '1982']
    FAVORITE_COLORS_CHOICES = [
        ('blue', 'Blue'),
        ('green', 'Green'),
        ('black', 'Black'),
    ]

    class SimpleForm(forms.Form):
        birth_year = forms.DateField(widget=forms.SelectDateWidget(years=BIRTH_YEAR_CHOICES))
        favorite_colors = forms.MultipleChoiceField(
            required=False,
            widget=forms.CheckboxSelectMultiple,
            choices=FAVORITE_COLORS_CHOICES,
        )

See the Built-in widgets for more information about which widgets are available and which arguments they accept.

Widgets inheriting from the Select widget

Widgets inheriting from the Select widget deal with choices. They present the user with a list of options to choose from. The different widgets present this choice differently; the Select widget itself uses a <select> HTML list representation, while RadioSelect uses radio buttons. Select widgets are used by default on ChoiceField fields. The choices displayed on the widget are inherited from the ChoiceField, and changing ChoiceField.choices will update Select.choices. For example:

    >>> from django import forms
    >>> CHOICES = [('1', 'First'), ('2', 'Second')]
    >>> choice_field = forms.ChoiceField(widget=forms.RadioSelect, choices=CHOICES)
    >>> choice_field.choices
    [('1', 'First'), ('2', 'Second')]
    >>> choice_field.widget.choices
    [('1', 'First'), ('2', 'Second')]
    >>> choice_field.widget.choices = []
    >>> choice_field.choices = [('1', 'First and only')]
    >>> choice_field.widget.choices
    [('1', 'First and only')]

Widgets which offer a choices attribute can however be used with fields which are not based on choice – such as a CharField – but it is recommended to use a ChoiceField-based field when the choices are inherent to the model and not just the representational widget.

Customizing widget instances

When Django renders a widget as HTML, it only renders very minimal markup – Django doesn't add class names, or any other widget-specific attributes.
This means, for example, that all TextInput widgets will appear the same on your web pages. There are two ways to customize widgets: per widget instance and per widget class.

Styling widget instances

If you want to make one widget instance look different from another, you will need to specify additional attributes at the time when the widget object is instantiated and assigned to a form field (and perhaps add some rules to your CSS files). For example, take the following form:

    from django import forms

    class CommentForm(forms.Form):
        name = forms.CharField()
        url = forms.URLField()
        comment = forms.CharField()

This form will include three default TextInput widgets, with default rendering – no CSS class, no extra attributes. This means that the input boxes provided for each widget will be rendered exactly the same:

    >>> f = CommentForm(auto_id=False)
    >>> f.as_table()
    <tr><th>Name:</th><td><input type="text" name="name" required></td></tr>
    <tr><th>Url:</th><td><input type="url" name="url" required></td></tr>
    <tr><th>Comment:</th><td><input type="text" name="comment" required></td></tr>

On a real web page, you probably don't want every widget to look the same. You might want a larger input element for the comment, and you might want the 'name' widget to have some special CSS class. It is also possible to specify the 'type' attribute to take advantage of the new HTML5 input types.
To do this, you use the Widget.attrs argument when creating the widget:

    class CommentForm(forms.Form):
        name = forms.CharField(widget=forms.TextInput(attrs={'class': 'special'}))
        url = forms.URLField()
        comment = forms.CharField(widget=forms.TextInput(attrs={'size': '40'}))

You can also modify a widget in the form definition:

    class CommentForm(forms.Form):
        name = forms.CharField()
        url = forms.URLField()
        comment = forms.CharField()
        name.widget.attrs.update({'class': 'special'})
        comment.widget.attrs.update(size='40')

Or if the field isn't declared directly on the form (such as model form fields), you can use the Form.fields attribute:

    class CommentForm(forms.ModelForm):
        def __init__(self, *args, **kwargs):
            super().__init__(*args, **kwargs)
            self.fields['name'].widget.attrs.update({'class': 'special'})
            self.fields['comment'].widget.attrs.update(size='40')

Django will then include the extra attributes in the rendered output:

    >>> f = CommentForm(auto_id=False)
    >>> f.as_table()
    <tr><th>Name:</th><td><input type="text" name="name" class="special" required></td></tr>
    <tr><th>Url:</th><td><input type="url" name="url" required></td></tr>
    <tr><th>Comment:</th><td><input type="text" name="comment" size="40" required></td></tr>

You can also set the HTML id using attrs. See BoundField.id_for_label for an example.

Styling widget classes

With widgets, it is possible to add assets (css and javascript) and more deeply customize their appearance and behavior. In a nutshell, you will need to subclass the widget and either define a "Media" inner class or create a "media" property. These methods involve somewhat advanced Python programming and are described in detail in the Form Assets topic guide.

Base widget classes

Base widget classes Widget and MultiWidget are subclassed by all the built-in widgets and may serve as a foundation for custom widgets.

Widget

class Widget(attrs=None)

This abstract class cannot be rendered, but provides the basic attribute attrs.
You may also implement or override the render() method on custom widgets.

attrs
    A dictionary containing HTML attributes to be set on the rendered widget.

    >>> from django import forms
    >>> name = forms.TextInput(attrs={'size': 10, 'title': 'Your name'})
    >>> name.render('name', 'A name')
    '<input title="Your name" type="text" name="name" value="A name" size="10">'

    If you assign a value of True or False to an attribute, it will be rendered as an HTML5 boolean attribute:

    >>> name = forms.TextInput(attrs={'required': True})
    >>> name.render('name', 'A name')
    '<input name="name" type="text" value="A name" required>'
    >>> name = forms.TextInput(attrs={'required': False})
    >>> name.render('name', 'A name')
    '<input name="name" type="text" value="A name">'

supports_microseconds
    An attribute that defaults to True. If set to False, the microseconds part of datetime and time values will be set to 0.

format_value(value)
    Cleans and returns a value for use in the widget template. value isn't guaranteed to be valid input, therefore subclass implementations should program defensively.

get_context(name, value, attrs)
    Returns a dictionary of values to use when rendering the widget template. By default, the dictionary contains a single key, 'widget', which is a dictionary representation of the widget containing the following keys:

    'name': The name of the field from the name argument.
    'is_hidden': A boolean indicating whether or not this widget is hidden.
    'required': A boolean indicating whether or not the field for this widget is required.
    'value': The value as returned by format_value().
    'attrs': HTML attributes to be set on the rendered widget. The combination of the attrs attribute and the attrs argument.
    'template_name': The value of self.template_name.

    Widget subclasses can provide custom context values by overriding this method.

id_for_label(id_)
    Returns the HTML ID attribute of this widget for use by a <label>, given the ID of the field. Returns None if an ID isn't available.
    This hook is necessary because some widgets have multiple HTML elements and, thus, multiple IDs. In that case, this method should return an ID value that corresponds to the first ID in the widget's tags.

render(name, value, attrs=None, renderer=None)
    Renders a widget to HTML using the given renderer. If renderer is None, the renderer from the FORM_RENDERER setting is used.

value_from_datadict(data, files, name)
    Given a dictionary of data and this widget's name, returns the value of this widget. files may contain data coming from request.FILES. Returns None if a value wasn't provided. Note also that value_from_datadict may be called more than once during handling of form data, so if you customize it and add expensive processing, you should implement some caching mechanism yourself.

value_omitted_from_data(data, files, name)
    Given data and files dictionaries and this widget's name, returns whether or not there's data or files for the widget. The method's result affects whether or not a field in a model form falls back to its default. Special cases are CheckboxInput, CheckboxSelectMultiple, and SelectMultiple, which always return False because an unchecked checkbox and unselected <select multiple> don't appear in the data of an HTML form submission, so it's unknown whether or not the user submitted a value.

use_required_attribute(initial)
    Given a form field's initial value, returns whether or not the widget can be rendered with the required HTML attribute. Forms use this method along with Field.required and Form.use_required_attribute to determine whether or not to display the required attribute for each field. By default, returns False for hidden widgets and True otherwise. Special cases are FileInput and ClearableFileInput, which return False when initial is set, and CheckboxSelectMultiple, which always returns False because browser validation would require all checkboxes to be checked instead of at least one.
    Override this method in custom widgets that aren't compatible with browser validation. For example, a WYSIWYG text editor widget backed by a hidden textarea element may want to always return False to avoid browser validation on the hidden field.

MultiWidget

class MultiWidget(widgets, attrs=None)

A widget that is composed of multiple widgets. MultiWidget works hand in hand with the MultiValueField. MultiWidget has one required argument:

widgets
    An iterable containing the widgets needed. For example:

    >>> from django.forms import MultiWidget, TextInput
    >>> widget = MultiWidget(widgets=[TextInput, TextInput])
    >>> widget.render('name', ['john', 'paul'])
    '<input type="text" name="name_0" value="john"><input type="text" name="name_1" value="paul">'

    You may provide a dictionary in order to specify custom suffixes for the name attribute on each subwidget. In this case, for each (key, widget) pair, the key will be appended to the name of the widget in order to generate the attribute value. You may provide the empty string ('') for a single key, in order to suppress the suffix for one widget. For example:

    >>> widget = MultiWidget(widgets={'': TextInput, 'last': TextInput})
    >>> widget.render('name', ['john', 'paul'])
    '<input type="text" name="name" value="john"><input type="text" name="name_last" value="paul">'

And one required method:

decompress(value)
    This method takes a single "compressed" value from the field and returns a list of "decompressed" values. The input value can be assumed valid, but not necessarily non-empty. This method must be implemented by the subclass, and since the value may be empty, the implementation must be defensive. The rationale behind "decompression" is that it is necessary to "split" the combined value of the form field into the values for each widget.
An example of this is how SplitDateTimeWidget turns a datetime value into a list with date and time split into two separate values:

    from django.forms import MultiWidget

    class SplitDateTimeWidget(MultiWidget):
        # ...
        def decompress(self, value):
            if value:
                return [value.date(), value.time()]
            return [None, None]

Tip: Note that MultiValueField has a complementary method compress() with the opposite responsibility – to combine cleaned values of all member fields into one.

It provides some custom context:

get_context(name, value, attrs)
    In addition to the 'widget' key described in Widget.get_context(), MultiWidget adds a widget['subwidgets'] key. These can be looped over in the widget template:

    {% for subwidget in widget.subwidgets %}
        {% include subwidget.template_name with widget=subwidget %}
    {% endfor %}

Here's an example widget which subclasses MultiWidget to display a date with the day, month, and year in different select boxes. This widget is intended to be used with a DateField rather than a MultiValueField, thus we have implemented value_from_datadict():

    from datetime import date
    from django import forms

    class DateSelectorWidget(forms.MultiWidget):
        def __init__(self, attrs=None):
            days = [(day, day) for day in range(1, 32)]
            months = [(month, month) for month in range(1, 13)]
            years = [(year, year) for year in [2018, 2019, 2020]]
            widgets = [
                forms.Select(attrs=attrs, choices=days),
                forms.Select(attrs=attrs, choices=months),
                forms.Select(attrs=attrs, choices=years),
            ]
            super().__init__(widgets, attrs)

        def decompress(self, value):
            if isinstance(value, date):
                return [value.day, value.month, value.year]
            elif isinstance(value, str):
                year, month, day = value.split('-')
                return [day, month, year]
            return [None, None, None]

        def value_from_datadict(self, data, files, name):
            day, month, year = super().value_from_datadict(data, files, name)
            # DateField expects a single string that it can parse into a date.
            return '{}-{}-{}'.format(year, month, day)

The constructor creates several Select widgets in a list. The super() method uses this list to set up the widget. The required method decompress() breaks up a datetime.date value into the day, month, and year values corresponding to each widget. If an invalid date was selected, such as the non-existent 30th February, the DateField passes this method a string instead, so that needs parsing. The final return handles when value is None, meaning we don't have any defaults for our subwidgets.

The default implementation of value_from_datadict() returns a list of values corresponding to each Widget. This is appropriate when using a MultiWidget with a MultiValueField. But since we want to use this widget with a DateField, which takes a single value, we have overridden this method. The implementation here combines the data from the subwidgets into a string in the format that DateField expects.

Built-in widgets

Django provides a representation of all the basic HTML widgets, plus some commonly used groups of widgets in the django.forms.widgets module, including the input of text, various checkboxes and selectors, uploading files, and handling of multi-valued input.

Widgets handling input of text

These widgets make use of the HTML elements input and textarea.

TextInput

class TextInput
    input_type: 'text'
    template_name: 'django/forms/widgets/text.html'
    Renders as: <input type="text" ...>

NumberInput

class NumberInput
    input_type: 'number'
    template_name: 'django/forms/widgets/number.html'
    Renders as: <input type="number" ...>

    Beware that not all browsers support entering localized numbers in number input types. Django itself avoids using them for fields having their localize property set to True.
EmailInput

class EmailInput
    input_type: 'email'
    template_name: 'django/forms/widgets/email.html'
    Renders as: <input type="email" ...>

URLInput

class URLInput
    input_type: 'url'
    template_name: 'django/forms/widgets/url.html'
    Renders as: <input type="url" ...>

PasswordInput

class PasswordInput
    input_type: 'password'
    template_name: 'django/forms/widgets/password.html'
    Renders as: <input type="password" ...>

    Takes one optional argument:

    render_value
        Determines whether the widget will have a value filled in when the form is re-displayed after a validation error (default is False).

HiddenInput

class HiddenInput
    input_type: 'hidden'
    template_name: 'django/forms/widgets/hidden.html'
    Renders as: <input type="hidden" ...>

    Note that there also is a MultipleHiddenInput widget that encapsulates a set of hidden input elements.

DateInput

class DateInput
    input_type: 'text'
    template_name: 'django/forms/widgets/date.html'
    Renders as: <input type="text" ...>

    Takes same arguments as TextInput, with one more optional argument:

    format
        The format in which this field's initial value will be displayed. If no format argument is provided, the default format is the first format found in DATE_INPUT_FORMATS and respects Format localization.

DateTimeInput

class DateTimeInput
    input_type: 'text'
    template_name: 'django/forms/widgets/datetime.html'
    Renders as: <input type="text" ...>

    Takes same arguments as TextInput, with one more optional argument:

    format
        The format in which this field's initial value will be displayed. If no format argument is provided, the default format is the first format found in DATETIME_INPUT_FORMATS and respects Format localization.

    By default, the microseconds part of the time value is always set to 0. If microseconds are required, use a subclass with the supports_microseconds attribute set to True.
TimeInput

class TimeInput
    input_type: 'text'
    template_name: 'django/forms/widgets/time.html'
    Renders as: <input type="text" ...>

    Takes same arguments as TextInput, with one more optional argument:

    format
        The format in which this field's initial value will be displayed. If no format argument is provided, the default format is the first format found in TIME_INPUT_FORMATS and respects Format localization.

    For the treatment of microseconds, see DateTimeInput.

Textarea

class Textarea
    template_name: 'django/forms/widgets/textarea.html'
    Renders as: <textarea>...</textarea>

Selector and checkbox widgets

These widgets make use of the HTML elements <select>, <input type="checkbox">, and <input type="radio">. Widgets that render multiple choices have an option_template_name attribute that specifies the template used to render each choice. For example, for the Select widget, select_option.html renders the <option> for a <select>.

CheckboxInput

class CheckboxInput
    input_type: 'checkbox'
    template_name: 'django/forms/widgets/checkbox.html'
    Renders as: <input type="checkbox" ...>

    Takes one optional argument:

    check_test
        A callable that takes the value of the CheckboxInput and returns True if the checkbox should be checked for that value.

Select

class Select
    template_name: 'django/forms/widgets/select.html'
    option_template_name: 'django/forms/widgets/select_option.html'
    Renders as: <select><option ...>...</select>

    choices
        This attribute is optional when the form field does not have a choices attribute. If it does, it will override anything you set here when the attribute is updated on the Field.
NullBooleanSelect

class NullBooleanSelect
    template_name: 'django/forms/widgets/select.html'
    option_template_name: 'django/forms/widgets/select_option.html'
    Select widget with options 'Unknown', 'Yes' and 'No'.

SelectMultiple

class SelectMultiple
    template_name: 'django/forms/widgets/select.html'
    option_template_name: 'django/forms/widgets/select_option.html'
    Similar to Select, but allows multiple selection: <select multiple>...</select>

RadioSelect

class RadioSelect
    template_name: 'django/forms/widgets/radio.html'
    option_template_name: 'django/forms/widgets/radio_option.html'
    Similar to Select, but rendered as a list of radio buttons within <div> tags:

    <div>
        <div><input type="radio" name="..."></div>
        ...
    </div>

    Changed in Django 4.0: So they are announced more concisely by screen readers, radio buttons were changed to render in <div> tags.

    For more granular control over the generated markup, you can loop over the radio buttons in the template. Assuming a form myform with a field beatles that uses a RadioSelect as its widget:

    <fieldset>
        <legend>{{ myform.beatles.label }}</legend>
        {% for radio in myform.beatles %}
        <div class="myradio">
            {{ radio }}
        </div>
        {% endfor %}
    </fieldset>

    This would generate the following HTML:

    <fieldset>
        <legend>Radio buttons</legend>
        <div class="myradio">
            <label for="id_beatles_0"><input id="id_beatles_0" name="beatles" type="radio" value="john" required> John</label>
        </div>
        <div class="myradio">
            <label for="id_beatles_1"><input id="id_beatles_1" name="beatles" type="radio" value="paul" required> Paul</label>
        </div>
        <div class="myradio">
            <label for="id_beatles_2"><input id="id_beatles_2" name="beatles" type="radio" value="george" required> George</label>
        </div>
        <div class="myradio">
            <label for="id_beatles_3"><input id="id_beatles_3" name="beatles" type="radio" value="ringo" required> Ringo</label>
        </div>
    </fieldset>

    That included the <label> tags.
    To get more granular, you can use each radio button's tag, choice_label and id_for_label attributes. For example, this template…

    <fieldset>
        <legend>{{ myform.beatles.label }}</legend>
        {% for radio in myform.beatles %}
        <label for="{{ radio.id_for_label }}">
            {{ radio.choice_label }}
            <span class="radio">{{ radio.tag }}</span>
        </label>
        {% endfor %}
    </fieldset>

    …will result in the following HTML:

    <fieldset>
        <legend>Radio buttons</legend>
        <label for="id_beatles_0">
            John
            <span class="radio"><input id="id_beatles_0" name="beatles" type="radio" value="john" required></span>
        </label>
        <label for="id_beatles_1">
            Paul
            <span class="radio"><input id="id_beatles_1" name="beatles" type="radio" value="paul" required></span>
        </label>
        <label for="id_beatles_2">
            George
            <span class="radio"><input id="id_beatles_2" name="beatles" type="radio" value="george" required></span>
        </label>
        <label for="id_beatles_3">
            Ringo
            <span class="radio"><input id="id_beatles_3" name="beatles" type="radio" value="ringo" required></span>
        </label>
    </fieldset>

    If you decide not to loop over the radio buttons – e.g., if your template includes {{ myform.beatles }} – they'll be output in a <div> with <div> tags, as above. The outer <div> container receives the id attribute of the widget, if defined, or BoundField.auto_id otherwise. When looping over the radio buttons, the label and input tags include for and id attributes, respectively. Each radio button has an id_for_label attribute to output the element's ID.

CheckboxSelectMultiple

class CheckboxSelectMultiple
    template_name: 'django/forms/widgets/checkbox_select.html'
    option_template_name: 'django/forms/widgets/checkbox_option.html'
    Similar to SelectMultiple, but rendered as a list of checkboxes:

    <div>
        <div><input type="checkbox" name="..."></div>
        ...
    </div>

    The outer <div> container receives the id attribute of the widget, if defined, or BoundField.auto_id otherwise.
    Changed in Django 4.0: So they are announced more concisely by screen readers, checkboxes were changed to render in <div> tags.

    Like RadioSelect, you can loop over the individual checkboxes for the widget's choices. Unlike RadioSelect, the checkboxes won't include the required HTML attribute if the field is required because browser validation would require all checkboxes to be checked instead of at least one. When looping over the checkboxes, the label and input tags include for and id attributes, respectively. Each checkbox has an id_for_label attribute to output the element's ID.

File upload widgets

FileInput

class FileInput
    template_name: 'django/forms/widgets/file.html'
    Renders as: <input type="file" ...>

ClearableFileInput

class ClearableFileInput
    template_name: 'django/forms/widgets/clearable_file_input.html'
    Renders as: <input type="file" ...> with an additional checkbox input to clear the field's value, if the field is not required and has initial data.

Composite widgets

MultipleHiddenInput

class MultipleHiddenInput
    template_name: 'django/forms/widgets/multiple_hidden.html'
    Renders as: multiple <input type="hidden" ...> tags

    A widget that handles multiple hidden widgets for fields that have a list of values.

SplitDateTimeWidget

class SplitDateTimeWidget
    template_name: 'django/forms/widgets/splitdatetime.html'
    Wrapper (using MultiWidget) around two widgets: DateInput for the date, and TimeInput for the time. Must be used with SplitDateTimeField rather than DateTimeField.

    SplitDateTimeWidget has several optional arguments:

    date_format
        Similar to DateInput.format

    time_format
        Similar to TimeInput.format

    date_attrs
    time_attrs
        Similar to Widget.attrs. A dictionary containing HTML attributes to be set on the rendered DateInput and TimeInput widgets, respectively. If these attributes aren't set, Widget.attrs is used instead.
SplitHiddenDateTimeWidget class SplitHiddenDateTimeWidget template_name: 'django/forms/widgets/splithiddendatetime.html' Similar to SplitDateTimeWidget, but uses HiddenInput for both date and time. SelectDateWidget class SelectDateWidget template_name: 'django/forms/widgets/select_date.html' Wrapper around three Select widgets: one each for month, day, and year. Takes several optional arguments: years An optional list/tuple of years to use in the “year” select box. The default is a list containing the current year and the next 9 years. months An optional dict of months to use in the “months” select box. The keys of the dict correspond to the month number (1-indexed) and the values are the displayed months: MONTHS = { 1:_('jan'), 2:_('feb'), 3:_('mar'), 4:_('apr'), 5:_('may'), 6:_('jun'), 7:_('jul'), 8:_('aug'), 9:_('sep'), 10:_('oct'), 11:_('nov'), 12:_('dec') } empty_label If the DateField is not required, SelectDateWidget will have an empty choice at the top of the list (which is --- by default). You can change the text of this label with the empty_label attribute. empty_label can be a string, list, or tuple. When a string is used, all select boxes will each have an empty choice with this label. If empty_label is a list or tuple of 3 string elements, the select boxes will have their own custom label. The labels should be in this order ('year_label', 'month_label', 'day_label'). # A custom empty label with string field1 = forms.DateField(widget=SelectDateWidget(empty_label="Nothing")) # A custom empty label with tuple field1 = forms.DateField( widget=SelectDateWidget( empty_label=("Choose Year", "Choose Month", "Choose Day"), ), )
django.ref.forms.widgets
Glossary (n,) A parenthesized number followed by a comma denotes a tuple with one element. The trailing comma distinguishes a one-element tuple from a parenthesized n. -1 In a dimension entry, instructs NumPy to choose the length that will keep the total number of array elements the same. >>> np.arange(12).reshape(4, -1).shape (4, 3) In an index, any negative value denotes indexing from the right. … An Ellipsis. When indexing an array, shorthand that the missing axes, if they exist, are full slices. >>> a = np.arange(24).reshape(2,3,4) >>> a[...].shape (2, 3, 4) >>> a[...,0].shape (2, 3) >>> a[0,...].shape (3, 4) >>> a[0,...,0].shape (3,) It can be used at most once; a[...,0,...] raises an IndexError. In printouts, NumPy substitutes ... for the middle elements of large arrays. To see the entire array, use numpy.printoptions : The Python slice operator. In ndarrays, slicing can be applied to every axis: >>> a = np.arange(24).reshape(2,3,4) >>> a array([[[ 0, 1, 2, 3], [ 4, 5, 6, 7], [ 8, 9, 10, 11]], [[12, 13, 14, 15], [16, 17, 18, 19], [20, 21, 22, 23]]]) >>> a[1:,-2:,:-1] array([[[16, 17, 18], [20, 21, 22]]]) Trailing slices can be omitted: >>> a[1] == a[1,:,:] array([[ True, True, True, True], [ True, True, True, True], [ True, True, True, True]]) In contrast to Python, where slicing creates a copy, in NumPy slicing creates a view. For details, see Combining advanced and basic indexing. < In a dtype declaration, indicates that the data is little-endian (the bracket is big on the right). >>> dt = np.dtype('<f') # little-endian single-precision float > In a dtype declaration, indicates that the data is big-endian (the bracket is big on the left). >>> dt = np.dtype('>H') # big-endian unsigned short advanced indexing Rather than using a scalar or slice as an index, an axis can be indexed with an array, providing fine-grained selection. This is known as advanced indexing or “fancy indexing”. 
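The view-versus-copy distinction between basic slicing (the `:` entry above) and advanced indexing can be checked directly with `numpy.shares_memory`. A small sketch:

```python
import numpy as np

a = np.arange(10)

s = a[2:5]                        # basic slice -> a view
f = a[[2, 3, 4]]                  # index array -> a copy (advanced indexing)

assert np.shares_memory(a, s)
assert not np.shares_memory(a, f)

a[2] = 99
assert s[0] == 99                 # the view reflects the change
assert f[0] == 2                  # the copy keeps the old value
```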
along an axis An operation along axis n of array a behaves as if its argument were an array of slices of a where each slice has a successive index of axis n. For example, if a is a 3 x N array, an operation along axis 0 behaves as if its argument were an array containing slices of each row: >>> np.array((a[0,:], a[1,:], a[2,:])) To make it concrete, we can pick the operation to be the array-reversal function numpy.flip, which accepts an axis argument. We construct a 3 x 4 array a: >>> a = np.arange(12).reshape(3,4) >>> a array([[ 0, 1, 2, 3], [ 4, 5, 6, 7], [ 8, 9, 10, 11]]) Reversing along axis 0 (the row axis) yields >>> np.flip(a,axis=0) array([[ 8, 9, 10, 11], [ 4, 5, 6, 7], [ 0, 1, 2, 3]]) Recalling the definition of along an axis, flip along axis 0 is treating its argument as if it were >>> np.array((a[0,:], a[1,:], a[2,:])) array([[ 0, 1, 2, 3], [ 4, 5, 6, 7], [ 8, 9, 10, 11]]) and the result of np.flip(a,axis=0) is to reverse the slices: >>> np.array((a[2,:],a[1,:],a[0,:])) array([[ 8, 9, 10, 11], [ 4, 5, 6, 7], [ 0, 1, 2, 3]]) array Used synonymously in the NumPy docs with ndarray. array_like Any scalar or sequence that can be interpreted as an ndarray. In addition to ndarrays and scalars this category includes lists (possibly nested and with different element types) and tuples. Any argument accepted by numpy.array is array_like. >>> a = np.array([[1, 2.0], [0, 0], (1+1j, 3.)]) >>> a array([[1.+0.j, 2.+0.j], [0.+0.j, 0.+0.j], [1.+1.j, 3.+0.j]]) array scalar An array scalar is an instance of the types/classes float32, float64, etc.. For uniformity in handling operands, NumPy treats a scalar as an array of zero dimension. In contrast, a 0-dimensional array is an ndarray instance containing precisely one value. axis Another term for an array dimension. Axes are numbered left to right; axis 0 is the first element in the shape tuple. In a two-dimensional vector, the elements of axis 0 are rows and the elements of axis 1 are columns. 
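For a concrete 2-D illustration of rows (axis 0) and columns (axis 1), a reduction along each axis behaves as follows:

```python
import numpy as np

a = np.arange(12).reshape(3, 4)   # 3 rows (axis 0), 4 columns (axis 1)

col_sums = a.sum(axis=0)          # collapse the rows: one sum per column
row_sums = a.sum(axis=1)          # collapse the columns: one sum per row

assert col_sums.shape == (4,)
assert (col_sums == [12, 15, 18, 21]).all()
assert (row_sums == [6, 22, 38]).all()
```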
In higher dimensions, the picture changes. NumPy prints higher-dimensional vectors as replications of row-by-column building blocks, as in this three-dimensional vector: >>> a = np.arange(12).reshape(2,2,3) >>> a array([[[ 0, 1, 2], [ 3, 4, 5]], [[ 6, 7, 8], [ 9, 10, 11]]]) a is depicted as a two-element array whose elements are 2x3 vectors. From this point of view, rows and columns are the final two axes, respectively, in any shape. This rule helps you anticipate how a vector will be printed, and conversely how to find the index of any of the printed elements. For instance, in the example, the last two values of 8’s index must be 0 and 2. Since 8 appears in the second of the two 2x3’s, the first index must be 1: >>> a[1,0,2] 8 A convenient way to count dimensions in a printed vector is to count [ symbols after the open-parenthesis. This is useful in distinguishing, say, a (1,2,3) shape from a (2,3) shape: >>> a = np.arange(6).reshape(2,3) >>> a.ndim 2 >>> a array([[0, 1, 2], [3, 4, 5]]) >>> a = np.arange(6).reshape(1,2,3) >>> a.ndim 3 >>> a array([[[0, 1, 2], [3, 4, 5]]]) .base If an array does not own its memory, then its base attribute returns the object whose memory the array is referencing. That object may be referencing the memory from still another object, so the owning object may be a.base.base.base.... Some writers erroneously claim that testing base determines if arrays are views. For the correct way, see numpy.shares_memory. big-endian See Endianness. BLAS Basic Linear Algebra Subprograms broadcast broadcasting is NumPy’s ability to process ndarrays of different sizes as if all were the same size. It permits an elegant do-what-I-mean behavior where, for instance, adding a scalar to a vector adds the scalar value to every element. 
>>> a = np.arange(3) >>> a array([0, 1, 2]) >>> a + [3, 3, 3] array([3, 4, 5]) >>> a + 3 array([3, 4, 5]) Ordinarily, vector operands must all be the same size, because NumPy works element by element – for instance, c = a * b is c[0,0,0] = a[0,0,0] * b[0,0,0] c[0,0,1] = a[0,0,1] * b[0,0,1] ... But in certain useful cases, NumPy can duplicate data along “missing” axes or “too-short” dimensions so shapes will match. The duplication costs no memory or time. For details, see Broadcasting. C order Same as row-major. column-major See Row- and column-major order. contiguous An array is contiguous if it occupies an unbroken block of memory, and array elements with higher indexes occupy higher addresses (that is, no stride is negative). copy See view. dimension See axis. dtype The datatype describing the (identically typed) elements in an ndarray. It can be changed to reinterpret the array contents. For details, see Data type objects (dtype). fancy indexing Another term for advanced indexing. field In a structured data type, each subtype is called a field. The field has a name (a string), a type (any valid dtype), and an optional title. See Data type objects (dtype). Fortran order Same as column-major. flattened See ravel. homogeneous All elements of a homogeneous array have the same type. ndarrays, in contrast to Python lists, are homogeneous. The type can be complicated, as in a structured array, but all elements have that type. NumPy object arrays, which contain references to Python objects, fill the role of heterogeneous arrays. itemsize The size of the dtype element in bytes. little-endian See Endianness.
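A sketch of the common broadcasting case beyond scalars, a 2-D array combined with a 1-D array whose length matches the trailing axis:

```python
import numpy as np

a = np.arange(12).reshape(3, 4)   # shape (3, 4)
b = np.array([10, 20, 30, 40])    # shape (4,)

c = a + b                         # b is broadcast across each row of a
assert c.shape == (3, 4)
assert (c[0] == [10, 21, 32, 43]).all()
assert (c[2] == [18, 29, 40, 51]).all()

# Shapes that cannot be aligned raise instead of silently misbehaving.
try:
    a + np.array([1, 2, 3])       # (3, 4) with (3,): not broadcastable
except ValueError:
    pass
else:
    raise AssertionError("expected a broadcasting ValueError")
```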
mask A boolean array used to select only certain elements for an operation: >>> x = np.arange(5) >>> x array([0, 1, 2, 3, 4]) >>> mask = (x > 2) >>> mask array([False, False, False, True, True]) >>> x[mask] = -1 >>> x array([ 0, 1, 2, -1, -1]) masked array Bad or missing data can be cleanly ignored by putting it in a masked array, which has an internal boolean array indicating invalid entries. Operations with masked arrays ignore these entries. >>> a = np.ma.masked_array([np.nan, 2, np.nan], [True, False, True]) >>> a masked_array(data=[--, 2.0, --], mask=[ True, False, True], fill_value=1e+20) >>> a + [1, 2, 3] masked_array(data=[--, 4.0, --], mask=[ True, False, True], fill_value=1e+20) For details, see Masked arrays. matrix NumPy’s two-dimensional matrix class should no longer be used; use regular ndarrays. ndarray NumPy’s basic structure. object array An array whose dtype is object; that is, it contains references to Python objects. Indexing the array dereferences the Python objects, so unlike other ndarrays, an object array has the ability to hold heterogeneous objects. ravel numpy.ravel and numpy.flatten both flatten an ndarray. ravel will return a view if possible; flatten always returns a copy. Flattening collapses a multidimensional array to a single dimension; details of how this is done (for instance, whether a[n+1] should be the next row or next column) are parameters. record array A structured array allowing access in an attribute style (a.field) in addition to a['field']. For details, see numpy.recarray. row-major See Row- and column-major order. NumPy creates arrays in row-major order by default. scalar In NumPy, usually a synonym for array scalar. shape A tuple showing the length of each dimension of an ndarray. The length of the tuple itself is the number of dimensions (numpy.ndim). The product of the tuple elements is the number of elements in the array. For details, see numpy.ndarray.shape.
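The ravel entry's view-versus-copy contrast with flatten can be verified directly:

```python
import numpy as np

a = np.arange(6).reshape(2, 3)

r = a.ravel()                     # a view, when the memory layout allows
f = a.flatten()                   # always a copy

assert np.shares_memory(a, r)
assert not np.shares_memory(a, f)

a[0, 0] = -1
assert r[0] == -1                 # the view sees the change
assert f[0] == 0                  # the copy does not
```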
stride Physical memory is one-dimensional; strides provide a mechanism to map a given index to an address in memory. For an N-dimensional array, its strides attribute is an N-element tuple; advancing from index i to index i+1 on axis n means adding a.strides[n] bytes to the address. Strides are computed automatically from an array’s dtype and shape, but can be directly specified using as_strided. For details, see numpy.ndarray.strides. To see how striding underlies the power of NumPy views, see The NumPy array: a structure for efficient numerical computation. structured array Array whose dtype is a structured data type. structured data type Users can create arbitrarily complex dtypes that can include other arrays and dtypes. These composite dtypes are called structured data types. subarray An array nested in a structured data type, as b is here: >>> dt = np.dtype([('a', np.int32), ('b', np.float32, (3,))]) >>> np.zeros(3, dtype=dt) array([(0, [0., 0., 0.]), (0, [0., 0., 0.]), (0, [0., 0., 0.])], dtype=[('a', '<i4'), ('b', '<f4', (3,))]) subarray data type An element of a structured datatype that behaves like an ndarray. title An alias for a field name in a structured datatype. type In NumPy, usually a synonym for dtype. For the more general Python meaning, see here. ufunc NumPy’s fast element-by-element computation (vectorization) gives a choice which function gets applied. The general term for the function is ufunc, short for universal function. NumPy routines have built-in ufuncs, but users can also write their own. vectorization NumPy hands off array processing to C, where looping and computation are much faster than in Python. To exploit this, programmers using NumPy eliminate Python loops in favor of array-to-array operations. vectorization can refer both to the C offloading and to structuring NumPy code to leverage it. view Without touching underlying data, NumPy can make one array appear to change its datatype and shape. 
An array created this way is a view, and NumPy often exploits the performance gain of using a view versus making a new array. A potential drawback is that writing to a view can alter the original as well. If this is a problem, NumPy instead needs to create a physically distinct array – a copy. Some NumPy routines always return views, some always return copies, some may return one or the other, and for some the choice can be specified. Responsibility for managing views and copies falls to the programmer. numpy.shares_memory will check whether b is a view of a, but an exact answer isn’t always feasible, as the documentation page explains. >>> x = np.arange(5) >>> x array([0, 1, 2, 3, 4]) >>> y = x[::2] >>> y array([0, 2, 4]) >>> x[0] = 3 # changing x changes y as well, since y is a view on x >>> y array([3, 2, 4])
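The stride mechanism described in the glossary can be inspected, and (cautiously) exploited, with as_strided. The sliding-window reinterpretation below is an illustration of how strides underlie views, not a recommended general-purpose pattern:

```python
import numpy as np
from numpy.lib.stride_tricks import as_strided

# For a C-ordered (3, 4) int32 array, stepping one row ahead skips
# 4 elements * 4 bytes; stepping one column ahead skips 4 bytes.
a = np.arange(12, dtype=np.int32).reshape(3, 4)
assert a.strides == (16, 4)

# Reinterpret the same buffer as overlapping length-3 windows.
# as_strided performs no bounds checking, so shape and strides must
# be chosen to stay inside the buffer.
x = np.arange(5, dtype=np.int32)
windows = as_strided(x, shape=(3, 3), strides=(4, 4))
assert (windows == [[0, 1, 2], [1, 2, 3], [2, 3, 4]]).all()
```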
numpy.glossary
Using F2PY F2PY can be used either as a command line tool f2py or as a Python module numpy.f2py. While we try to provide the command line tool as part of the numpy setup, some platforms like Windows make it difficult to reliably put the executables on the PATH. We will refer to f2py in this document but you may have to run it as a module: python -m numpy.f2py If you run f2py with no arguments, and the line numpy Version at the end matches the NumPy version printed from python -m numpy.f2py, then you can use the shorter version. If not, or if you cannot run f2py, you should replace all calls to f2py here with the longer version. Command f2py When used as a command line tool, f2py has three major modes, distinguished by the usage of -c and -h switches: Signature file generation To scan Fortran sources and generate a signature file, use f2py -h <filename.pyf> <options> <fortran files> \ [[ only: <fortran functions> : ] \ [ skip: <fortran functions> : ]]... \ [<fortran files> ...] Note A Fortran source file can contain many routines, and it is often not necessary to allow all routines to be usable from Python. In such cases, either specify which routines should be wrapped (in the only: .. : part) or which routines F2PY should ignore (in the skip: .. : part). If <filename.pyf> is specified as stdout then signatures are written to standard output instead of a file. Among other options (see below), the following can be used in this mode: --overwrite-signature Overwrites an existing signature file. Extension module construction To construct an extension module, use f2py -m <modulename> <options> <fortran files> \ [[ only: <fortran functions> : ] \ [ skip: <fortran functions> : ]]... \ [<fortran files> ...] The constructed extension module is saved as <modulename>module.c to the current directory. Here <fortran files> may also contain signature files.
Among other options (see below), the following options can be used in this mode: --debug-capi Adds debugging hooks to the extension module. When using this extension module, various diagnostic information about the wrapper is written to the standard output, for example, the values of variables, the steps taken, etc. -include'<includefile>' Add a CPP #include statement to the extension module source. <includefile> should be given in one of the following forms "filename.ext" <filename.ext> The include statement is inserted just before the wrapper functions. This feature enables using arbitrary C functions (defined in <includefile>) in F2PY generated wrappers. Note This option is deprecated. Use usercode statement to specify C code snippets directly in signature files. --[no-]wrap-functions Create Fortran subroutine wrappers to Fortran functions. --wrap-functions is default because it ensures maximum portability and compiler independence. --include-paths <path1>:<path2>:.. Search include files from given directories. --help-link [<list of resources names>] List system resources found by numpy_distutils/system_info.py. For example, try f2py --help-link lapack_opt. Building a module To build an extension module, use f2py -c <options> <fortran files> \ [[ only: <fortran functions> : ] \ [ skip: <fortran functions> : ]]... \ [ <fortran/c source files> ] [ <.o, .a, .so files> ] If <fortran files> contains a signature file, then the source for an extension module is constructed, all Fortran and C sources are compiled, and finally all object and library files are linked to the extension module <modulename>.so which is saved into the current directory. If <fortran files> does not contain a signature file, then an extension module is constructed by scanning all Fortran source codes for routine signatures, before proceeding to build the extension module. 
Among other options (see below) and options described for previous modes, the following options can be used in this mode: --help-fcompiler List the available Fortran compilers. --help-compiler [deprecated] List the available Fortran compilers. --fcompiler=<Vendor> Specify a Fortran compiler type by vendor. --f77exec=<path> Specify the path to an F77 compiler --fcompiler-exec=<path> [deprecated] Specify the path to an F77 compiler --f90exec=<path> Specify the path to an F90 compiler --f90compiler-exec=<path> [deprecated] Specify the path to an F90 compiler --f77flags=<string> Specify F77 compiler flags --f90flags=<string> Specify F90 compiler flags --opt=<string> Specify optimization flags --arch=<string> Specify architecture specific optimization flags --noopt Compile without optimization flags --noarch Compile without arch-dependent optimization flags --debug Compile with debugging information -l<libname> Use the library <libname> when linking. -D<macro>[=<defn=1>] Define macro <macro> as <defn>. -U<macro> Undefine macro <macro> -I<dir> Append directory <dir> to the list of directories searched for include files. -L<dir> Add directory <dir> to the list of directories to be searched for -l. --link-<resource> Link the extension module with <resource> as defined by numpy_distutils/system_info.py. E.g. to link with optimized LAPACK libraries (vecLib on MacOSX, ATLAS elsewhere), use --link-lapack_opt. See also --help-link switch. Note The f2py -c option must be applied either to an existing .pyf file (plus the source/object/library files) or one must specify the -m <modulename> option (plus the sources/object/library files). Use one of the following options: f2py -c -m fib1 fib1.f or f2py -m fib1 fib1.f -h fib1.pyf f2py -c fib1.pyf fib1.f For more information, see the Building C and C++ Extensions Python documentation.
When building an extension module, a combination of the following macros may be required for non-gcc Fortran compilers: -DPREPEND_FORTRAN -DNO_APPEND_FORTRAN -DUPPERCASE_FORTRAN To test the performance of F2PY generated interfaces, use -DF2PY_REPORT_ATEXIT. Then a report of various timings is printed out at the exit of Python. This feature may not work on all platforms; currently only the Linux platform is supported. To see whether F2PY generated interface performs copies of array arguments, use -DF2PY_REPORT_ON_ARRAY_COPY=<int>. When the size of an array argument is larger than <int>, a message about the copying is sent to stderr. Other options -m <modulename> Name of an extension module. Default is untitled. Warning Don’t use this option if a signature file (*.pyf) is used. --[no-]lower Do [not] lower the cases in <fortran files>. By default, --lower is assumed with -h switch, and --no-lower without the -h switch. --build-dir <dirname> All F2PY generated files are created in <dirname>. Default is tempfile.mkdtemp(). --quiet Run quietly. --verbose Run with extra verbosity. -v Print the F2PY version and exit. Execute f2py without any options to get an up-to-date list of available options. Python module numpy.f2py Warning The current Python interface to the f2py module is not mature and may change in the future. Fortran to Python Interface Generator. numpy.f2py.compile(source, modulename='untitled', extra_args='', verbose=True, source_fn=None, extension='.f', full_output=False)[source] Build extension module from a Fortran 77 source string with f2py. Parameters sourcestr or bytes Fortran source of module / subroutine to compile Changed in version 1.16.0: Accept str as well as bytes modulenamestr, optional The name of the compiled python module extra_argsstr or list, optional Additional parameters passed to f2py Changed in version 1.16.0: A list of args may also be provided.
verbosebool, optional Print f2py output to screen source_fnstr, optional Name of the file where the fortran source is written. The default is to use a temporary file with the extension provided by the extension parameter extension{‘.f’, ‘.f90’}, optional Filename extension if source_fn is not provided. The extension tells which fortran standard is used. The default is ‘.f’, which implies F77 standard. New in version 1.11.0. full_outputbool, optional If True, return a subprocess.CompletedProcess containing the stdout and stderr of the compile process, instead of just the status code. New in version 1.20.0. Returns resultint or subprocess.CompletedProcess 0 on success, or a subprocess.CompletedProcess if full_output=True Examples >>> import numpy.f2py >>> fsource = ''' ... subroutine foo ... print*, "Hello world!" ... end ... ''' >>> numpy.f2py.compile(fsource, modulename='hello', verbose=0) 0 >>> import hello >>> hello.foo() Hello world! numpy.f2py.get_include()[source] Return the directory that contains the fortranobject.c and .h files. Note This function is not needed when building an extension with numpy.distutils directly from .f and/or .pyf files in one go. Python extension modules built with f2py-generated code need to use fortranobject.c as a source file, and include the fortranobject.h header. This function can be used to obtain the directory containing both of these files. Returns include_pathstr Absolute path to the directory containing fortranobject.c and fortranobject.h. See also numpy.get_include function that returns the numpy include directory Notes New in version 1.22.0. Unless the build system you are using has specific support for f2py, building a Python extension using a .pyf signature file is a two-step process. For a module mymod: Step 1: run python -m numpy.f2py mymod.pyf --quiet. This generates _mymodmodule.c and (if needed) _mymod-f2pywrappers.f files next to mymod.pyf. Step 2: build your Python extension module.
This requires the following source files: _mymodmodule.c _mymod-f2pywrappers.f (if it was generated in step 1) fortranobject.c numpy.f2py.run_main(comline_list)[source] Equivalent to running: f2py <args> where <args>=string.join(<list>,' '), but in Python. Unless -h is used, this function returns a dictionary containing information on generated modules and their dependencies on source files. For example, the command f2py -m scalar scalar.f can be executed from Python as follows You cannot build extension modules with this function, that is, using -c is not allowed. Use compile command instead Examples >>> import numpy.f2py >>> r = numpy.f2py.run_main(['-m','scalar','doc/source/f2py/scalar.f']) Reading fortran codes... Reading file 'doc/source/f2py/scalar.f' (format:fix,strict) Post-processing... Block: scalar Block: FOO Building modules... Building module "scalar"... Wrote C/API module "scalar" to file "./scalarmodule.c" >>> print(r) {'scalar': {'h': ['/home/users/pearu/src_cvs/f2py/src/fortranobject.h'], 'csrc': ['./scalarmodule.c', '/home/users/pearu/src_cvs/f2py/src/fortranobject.c']}}
numpy.f2py.usage
Scalars Python defines only one type of a particular data class (there is only one integer type, one floating-point type, etc.). This can be convenient in applications that don’t need to be concerned with all the ways data can be represented in a computer. For scientific computing, however, more control is often needed. In NumPy, there are 24 new fundamental Python types to describe different types of scalars. These type descriptors are mostly based on the types available in the C language that CPython is written in, with several additional types compatible with Python’s types. Array scalars have the same attributes and methods as ndarrays. 1 This allows one to treat items of an array partly on the same footing as arrays, smoothing out rough edges that result when mixing scalar and array operations. Array scalars live in a hierarchy (see the Figure below) of data types. They can be detected using the hierarchy: For example, isinstance(val, np.generic) will return True if val is an array scalar object. Alternatively, what kind of array scalar is present can be determined using other members of the data type hierarchy. Thus, for example isinstance(val, np.complexfloating) will return True if val is a complex valued type, while isinstance(val, np.flexible) will return true if val is one of the flexible itemsize array types (str_, bytes_, void). Figure: Hierarchy of type objects representing the array data types. Not shown are the two integer types intp and uintp which just point to the integer type that holds a pointer for the platform. All the number types can be obtained using bit-width names as well. 1 However, array scalars are immutable, so none of the array scalar attributes are settable. Built-in scalar types The built-in scalar types are shown below. The C-like names are associated with character codes, which are shown in their descriptions. Use of the character codes, however, is discouraged. 
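A short sketch of the isinstance checks against the type hierarchy described above:

```python
import numpy as np

# Every array scalar is an instance of np.generic; abstract subclasses
# narrow the kind (np.floating, np.complexfloating, np.flexible, ...).
val = np.float32(1.5)
assert isinstance(val, np.generic)
assert isinstance(val, np.floating)
assert not isinstance(val, np.complexfloating)

# str_ is one of the flexible (variable-itemsize) types.
s = np.str_("abc")
assert isinstance(s, np.flexible)
```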
Some of the scalar types are essentially equivalent to fundamental Python types and therefore inherit from them as well as from the generic array scalar type: Array scalar type Related Python type Inherits? int_ int Python 2 only float_ float yes complex_ complex yes bytes_ bytes yes str_ str yes bool_ bool no datetime64 datetime.datetime no timedelta64 datetime.timedelta no The bool_ data type is very similar to the Python bool but does not inherit from it because Python’s bool does not allow itself to be inherited from, and on the C-level the size of the actual bool data is not the same as a Python Boolean scalar. Warning The int_ type does not inherit from the int built-in under Python 3, because type int is no longer a fixed-width integer type. Tip The default data type in NumPy is float_. class numpy.generic[source] Base class for numpy scalar types. Class from which most (all?) numpy scalar types are derived. For consistency, exposes the same API as ndarray, despite many consequent attributes being either “get-only,” or completely irrelevant. This is the class from which it is strongly suggested users should derive custom scalar types. class numpy.number[source] Abstract base class of all numeric scalar types. Integer types class numpy.integer[source] Abstract base class of all integer scalar types. Note The numpy integer types mirror the behavior of C integers, and can therefore be subject to Overflow Errors. Signed integer types class numpy.signedinteger[source] Abstract base class of all signed integer scalar types. class numpy.byte[source] Signed integer type, compatible with C char. Character code 'b' Alias on this platform (Linux x86_64) numpy.int8: 8-bit signed integer (-128 to 127). class numpy.short[source] Signed integer type, compatible with C short. Character code 'h' Alias on this platform (Linux x86_64) numpy.int16: 16-bit signed integer (-32_768 to 32_767). class numpy.intc[source] Signed integer type, compatible with C int. 
Character code 'i' Alias on this platform (Linux x86_64) numpy.int32: 32-bit signed integer (-2_147_483_648 to 2_147_483_647). class numpy.int_[source] Signed integer type, compatible with Python int and C long. Character code 'l' Alias on this platform (Linux x86_64) numpy.int64: 64-bit signed integer (-9_223_372_036_854_775_808 to 9_223_372_036_854_775_807). Alias on this platform (Linux x86_64) numpy.intp: Signed integer large enough to fit pointer, compatible with C intptr_t. class numpy.longlong[source] Signed integer type, compatible with C long long. Character code 'q' Unsigned integer types class numpy.unsignedinteger[source] Abstract base class of all unsigned integer scalar types. class numpy.ubyte[source] Unsigned integer type, compatible with C unsigned char. Character code 'B' Alias on this platform (Linux x86_64) numpy.uint8: 8-bit unsigned integer (0 to 255). class numpy.ushort[source] Unsigned integer type, compatible with C unsigned short. Character code 'H' Alias on this platform (Linux x86_64) numpy.uint16: 16-bit unsigned integer (0 to 65_535). class numpy.uintc[source] Unsigned integer type, compatible with C unsigned int. Character code 'I' Alias on this platform (Linux x86_64) numpy.uint32: 32-bit unsigned integer (0 to 4_294_967_295). class numpy.uint[source] Unsigned integer type, compatible with C unsigned long. Character code 'L' Alias on this platform (Linux x86_64) numpy.uint64: 64-bit unsigned integer (0 to 18_446_744_073_709_551_615). Alias on this platform (Linux x86_64) numpy.uintp: Unsigned integer large enough to fit pointer, compatible with C uintptr_t. class numpy.ulonglong[source] Unsigned integer type, compatible with C unsigned long long. Character code 'Q' Inexact types class numpy.inexact[source] Abstract base class of all numeric scalar types with a (potentially) inexact representation of the values in its range, such as floating-point numbers.
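The ranges quoted for the integer types above can be queried with np.iinfo, and array arithmetic wraps on overflow, mirroring the C integers these types are based on:

```python
import numpy as np

# np.iinfo reports each fixed-width type's range.
assert np.iinfo(np.int8).min == -128
assert np.iinfo(np.int8).max == 127
assert np.iinfo(np.uint16).max == 65_535

# Array arithmetic wraps modulo 2**bits rather than raising.
x = np.array([127], dtype=np.int8)
y = x + np.int8(1)
assert y.dtype == np.int8
assert y[0] == -128
```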
Note Inexact scalars are printed using the fewest decimal digits needed to distinguish their value from other values of the same datatype, by judicious rounding. See the unique parameter of format_float_positional and format_float_scientific. This means that variables with equal binary values but whose datatypes are of different precisions may display differently: >>> f16 = np.float16("0.1") >>> f32 = np.float32(f16) >>> f64 = np.float64(f32) >>> f16 == f32 == f64 True >>> f16, f32, f64 (0.1, 0.099975586, 0.0999755859375) Note that none of these floats hold the exact value \(\frac{1}{10}\); f16 prints as 0.1 because it is as close to that value as possible, whereas the other types do not as they have more precision and therefore have closer values. Conversely, floating-point scalars of different precisions which approximate the same decimal value may compare unequal despite printing identically: >>> f16 = np.float16("0.1") >>> f32 = np.float32("0.1") >>> f64 = np.float64("0.1") >>> f16 == f32 == f64 False >>> f16, f32, f64 (0.1, 0.1, 0.1) Floating-point types class numpy.floating[source] Abstract base class of all floating-point scalar types. class numpy.half[source] Half-precision floating-point number type. Character code 'e' Alias on this platform (Linux x86_64) numpy.float16: 16-bit-precision floating-point number type: sign bit, 5 bits exponent, 10 bits mantissa. class numpy.single[source] Single-precision floating-point number type, compatible with C float. Character code 'f' Alias on this platform (Linux x86_64) numpy.float32: 32-bit-precision floating-point number type: sign bit, 8 bits exponent, 23 bits mantissa. class numpy.double(x=0, /)[source] Double-precision floating-point number type, compatible with Python float and C double. Character code 'd' Alias numpy.float_ Alias on this platform (Linux x86_64) numpy.float64: 64-bit precision floating-point number type: sign bit, 11 bits exponent, 52 bits mantissa. 
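np.finfo exposes the machine parameters behind the printing behavior described in the note above, for example:

```python
import numpy as np

# Machine parameters for each floating-point type.
assert np.finfo(np.float16).bits == 16
assert np.finfo(np.float32).eps == 2.0 ** -23
assert np.finfo(np.float64).eps == 2.0 ** -52

# The same binary value at two precisions: equal, but printed with
# different digit counts, since each repr uses the fewest digits that
# round-trip at that precision.
f16 = np.float16("0.1")
f64 = np.float64(f16)
assert f16 == f64
assert str(f16) == "0.1"
assert str(f64) == "0.0999755859375"
```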
class numpy.longdouble[source] Extended-precision floating-point number type, compatible with C long double but not necessarily with IEEE 754 quadruple-precision. Character code 'g' Alias numpy.longfloat Alias on this platform (Linux x86_64) numpy.float128: 128-bit extended-precision floating-point number type. Complex floating-point types class numpy.complexfloating[source] Abstract base class of all complex number scalar types that are made up of floating-point numbers. class numpy.csingle[source] Complex number type composed of two single-precision floating-point numbers. Character code 'F' Alias numpy.singlecomplex Alias on this platform (Linux x86_64) numpy.complex64: Complex number type composed of 2 32-bit-precision floating-point numbers. class numpy.cdouble(real=0, imag=0)[source] Complex number type composed of two double-precision floating-point numbers, compatible with Python complex. Character code 'D' Alias numpy.cfloat Alias numpy.complex_ Alias on this platform (Linux x86_64) numpy.complex128: Complex number type composed of 2 64-bit-precision floating-point numbers. class numpy.clongdouble[source] Complex number type composed of two extended-precision floating-point numbers. Character code 'G' Alias numpy.clongfloat Alias numpy.longcomplex Alias on this platform (Linux x86_64) numpy.complex256: Complex number type composed of 2 128-bit extended-precision floating-point numbers. Other types class numpy.bool_[source] Boolean type (True or False), stored as a byte. Warning The bool_ type is not a subclass of the int_ type (the bool_ is not even a number type). This is different than Python’s default implementation of bool as a sub-class of int. Character code '?' Alias numpy.bool8 class numpy.datetime64[source] If created from a 64-bit integer, it represents an offset from 1970-01-01T00:00:00. If created from string, the string can be in ISO 8601 date or datetime format. 
>>> np.datetime64(10, 'Y') numpy.datetime64('1980') >>> np.datetime64('1980', 'Y') numpy.datetime64('1980') >>> np.datetime64(10, 'D') numpy.datetime64('1970-01-11') See Datetimes and Timedeltas for more information. Character code 'M' class numpy.timedelta64[source] A timedelta stored as a 64-bit integer. See Datetimes and Timedeltas for more information. Character code 'm' class numpy.object_[source] Any Python object. Character code 'O' Note The data actually stored in object arrays (i.e., arrays having dtype object_) are references to Python objects, not the objects themselves. Hence, object arrays behave more like usual Python lists, in the sense that their contents need not be of the same Python type. The object type is also special because an array containing object_ items does not return an object_ object on item access, but instead returns the actual object that the array item refers to. The following data types are flexible: they have no predefined size and the data they describe can be of different length in different arrays. (In the character codes # is an integer denoting how many elements the data type consists of.) class numpy.flexible[source] Abstract base class of all scalar types without predefined length. The actual size of these types depends on the specific np.dtype instantiation. class numpy.bytes_[source] A byte string. When used in arrays, this type strips trailing null bytes. Character code 'S' Alias numpy.string_ class numpy.str_[source] A unicode string. When used in arrays, this type strips trailing null codepoints. Unlike the builtin str, this supports the Buffer Protocol, exposing its contents as UCS4: >>> m = memoryview(np.str_("abc")) >>> m.format '3w' >>> m.tobytes() b'a\x00\x00\x00b\x00\x00\x00c\x00\x00\x00' Character code 'U' Alias numpy.unicode_ class numpy.void[source] Either an opaque sequence of bytes, or a structure. 
>>> np.void(b'abcd') void(b'\x61\x62\x63\x64') Structured void scalars can only be constructed via extraction from Structured arrays: >>> arr = np.array((1, 2), dtype=[('x', np.int8), ('y', np.int8)]) >>> arr[()] (1, 2) # looks like a tuple, but is `np.void` Character code 'V' Warning See Note on string types. Numeric Compatibility: If you used old typecode characters in your Numeric code (which was never recommended), you will need to change some of them to the new characters. In particular, the needed changes are c -> S1, b -> B, 1 -> b, s -> h, w -> H, and u -> I. These changes make the type character convention more consistent with other Python modules such as the struct module. Sized aliases Along with their (mostly) C-derived names, the integer, float, and complex data-types are also available using a bit-width convention so that an array of the right size can always be ensured. Two aliases (numpy.intp and numpy.uintp) pointing to the integer type that is sufficiently large to hold a C pointer are also provided. numpy.bool8[source] alias of numpy.bool_ numpy.int8[source] numpy.int16 numpy.int32 numpy.int64 Aliases for the signed integer types (one of numpy.byte, numpy.short, numpy.intc, numpy.int_ and numpy.longlong) with the specified number of bits. Compatible with the C99 int8_t, int16_t, int32_t, and int64_t, respectively. numpy.uint8[source] numpy.uint16 numpy.uint32 numpy.uint64 Alias for the unsigned integer types (one of numpy.ubyte, numpy.ushort, numpy.uintc, numpy.uint and numpy.ulonglong) with the specified number of bits. Compatible with the C99 uint8_t, uint16_t, uint32_t, and uint64_t, respectively. numpy.intp[source] Alias for the signed integer type (one of numpy.byte, numpy.short, numpy.intc, numpy.int_ and np.longlong) that is the same size as a pointer. Compatible with the C intptr_t. 
Character code 'p' numpy.uintp[source] Alias for the unsigned integer type (one of numpy.ubyte, numpy.ushort, numpy.uintc, numpy.uint and np.ulonglong) that is the same size as a pointer. Compatible with the C uintptr_t. Character code 'P' numpy.float16[source] alias of numpy.half numpy.float32[source] alias of numpy.single numpy.float64[source] alias of numpy.double numpy.float96 numpy.float128[source] Alias for numpy.longdouble, named after its size in bits. The existence of these aliases depends on the platform. numpy.complex64[source] alias of numpy.csingle numpy.complex128[source] alias of numpy.cdouble numpy.complex192 numpy.complex256[source] Alias for numpy.clongdouble, named after its size in bits. The existence of these aliases depends on the platform. Other aliases The first two of these are conveniences which resemble the names of the builtin types, in the same style as bool_, int_, str_, bytes_, and object_: numpy.float_[source] alias of numpy.double numpy.complex_[source] alias of numpy.cdouble Some more use alternate naming conventions for extended-precision floats and complex numbers: numpy.longfloat[source] alias of numpy.longdouble numpy.singlecomplex[source] alias of numpy.csingle numpy.cfloat[source] alias of numpy.cdouble numpy.longcomplex[source] alias of numpy.clongdouble numpy.clongfloat[source] alias of numpy.clongdouble The following aliases originate from Python 2, and it is recommended that they not be used in new code. numpy.string_[source] alias of numpy.bytes_ numpy.unicode_[source] alias of numpy.str_ Attributes The array scalar objects have an array priority of NPY_SCALAR_PRIORITY (-1,000,000.0). They also do not (yet) have a ctypes attribute. Otherwise, they share the same attributes as arrays: generic.flags The integer value of flags. generic.shape Tuple of array dimensions. generic.strides Tuple of bytes steps in each dimension. generic.ndim The number of array dimensions. generic.data Pointer to start of data. 
generic.size The number of elements in the gentype. generic.itemsize The length of one element in bytes. generic.base Scalar attribute identical to the corresponding array attribute. generic.dtype Get array data-descriptor. generic.real The real part of the scalar. generic.imag The imaginary part of the scalar. generic.flat A 1-D view of the scalar. generic.T Scalar attribute identical to the corresponding array attribute. generic.__array_interface__ Array protocol: Python side generic.__array_struct__ Array protocol: struct generic.__array_priority__ Array priority. generic.__array_wrap__ sc.__array_wrap__(obj) return scalar from array Indexing See also Indexing routines, Data type objects (dtype) Array scalars can be indexed like 0-dimensional arrays: if x is an array scalar, x[()] returns a copy of array scalar x[...] returns a 0-dimensional ndarray x['field-name'] returns the array scalar in the field field-name. (x can have fields, for example, when it corresponds to a structured data type.) Methods Array scalars have exactly the same methods as arrays. The default behavior of these methods is to internally convert the scalar to an equivalent 0-dimensional array and to call the corresponding array method. In addition, math operations on array scalars are defined so that the same hardware flags are set and used to interpret the results as for ufunc, so that the error state used for ufuncs also carries over to the math on array scalars. The exceptions to the above rules are given below: generic.__array__ sc.__array__(dtype) return 0-dim array from scalar with specified dtype generic.__array_wrap__ sc.__array_wrap__(obj) return scalar from array generic.squeeze Scalar method identical to the corresponding array attribute. generic.byteswap Scalar method identical to the corresponding array attribute. generic.__reduce__ Helper for pickle. generic.__setstate__ generic.setflags Scalar method identical to the corresponding array attribute. 
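The indexing behaviors described above can be verified directly; a short sketch:

```python
import numpy as np

x = np.float64(1.5)

# x[()] returns a copy of the array scalar itself.
print(type(x[()]))          # <class 'numpy.float64'>

# x[...] returns a 0-dimensional ndarray wrapping the same value.
arr = x[...]
print(arr.ndim, arr.shape)  # 0 ()

# A structured scalar extracted from a structured array supports
# field-name indexing.
rec = np.array((1, 2.0), dtype=[('a', np.int8), ('b', np.float64)])[()]
print(rec['b'])             # 2.0
```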
Utility method for typing: number.__class_getitem__(item, /) Return a parametrized wrapper around the number type. Defining new types There are two ways to effectively define a new array scalar type (apart from composing structured types dtypes from the built-in scalar types): One way is to simply subclass the ndarray and overwrite the methods of interest. This will work to a degree, but internally certain behaviors are fixed by the data type of the array. To fully customize the data type of an array you need to define a new data-type, and register it with NumPy. Such new types can only be defined in C, using the NumPy C-API.
numpy.reference.arrays.scalars
Parallel Random Number Generation

There are three strategies implemented that can be used to produce repeatable pseudo-random numbers across multiple processes (local or distributed).

SeedSequence spawning

SeedSequence implements an algorithm to process a user-provided seed, typically as an integer of some size, and to convert it into an initial state for a BitGenerator. It uses hashing techniques to ensure that low-quality seeds are turned into high-quality initial states (at least, with very high probability). For example, MT19937 has a state consisting of 624 uint32 integers. A naive way to take a 32-bit integer seed would be to just set the last element of the state to the 32-bit seed and leave the rest 0s. This is a valid state for MT19937, but not a good one. The Mersenne Twister algorithm suffers if there are too many 0s. Similarly, two adjacent 32-bit integer seeds (e.g. 12345 and 12346) would produce very similar streams. SeedSequence avoids these problems by using successions of integer hashes with good avalanche properties to ensure that flipping any bit in the input has about a 50% chance of flipping any bit in the output. Two input seeds that are very close to each other will produce initial states that are very far from each other (with very high probability). It is also constructed in such a way that you can provide arbitrary-sized integers or lists of integers. SeedSequence will take all of the bits that you provide and mix them together to produce however many bits the consuming BitGenerator needs to initialize itself. These properties together mean that we can safely mix together the usual user-provided seed with simple incrementing counters to get BitGenerator states that are (to very high probability) independent of each other. We can wrap this together into an API that is easy to use and difficult to misuse.
from numpy.random import SeedSequence, default_rng

ss = SeedSequence(12345)

# Spawn off 10 child SeedSequences to pass to child processes.
child_seeds = ss.spawn(10)
streams = [default_rng(s) for s in child_seeds]

Child SeedSequence objects can also spawn to make grandchildren, and so on. Each SeedSequence has its position in the tree of spawned SeedSequence objects mixed in with the user-provided seed to generate independent (with very high probability) streams.

grandchildren = child_seeds[0].spawn(4)
grand_streams = [default_rng(s) for s in grandchildren]

This feature lets you make local decisions about when and how to split up streams without coordination between processes. You do not have to preallocate space to avoid overlapping or request streams from a common global service. This general “tree-hashing” scheme is not unique to numpy, but it is not yet widespread. Python has increasingly-flexible mechanisms for parallelization available, and this scheme fits in very well with that kind of use.

Using this scheme, an upper bound on the probability of a collision can be estimated if one knows the number of streams that are derived. SeedSequence hashes its inputs, both the seed and the spawn-tree-path, down to a 128-bit pool by default. The probability that there is a collision in that pool, pessimistically-estimated (1), will be about \(n^2*2^{-128}\) where n is the number of streams spawned. If a program uses an aggressive million streams, about \(2^{20}\), then the probability that at least one pair of them are identical is about \(2^{-88}\), which is in solidly-ignorable territory (2).

1 The algorithm is carefully designed to eliminate a number of possible ways to collide. For example, if one only does one level of spawning, it is guaranteed that all states will be unique. But it’s easier to estimate the naive upper bound on a napkin and take comfort knowing that the probability is actually lower.
2 In this calculation, we can mostly ignore the amount of numbers drawn from each stream. See Upgrading PCG64 with PCG64DXSM for the technical details about PCG64. The other PRNGs we provide have some extra protection built in that avoids overlaps if the SeedSequence pools differ in the slightest bit. PCG64DXSM has \(2^{127}\) separate cycles determined by the seed in addition to the position in the \(2^{128}\) long period for each cycle, so one has to both get on or near the same cycle and seed a nearby position in the cycle. Philox has completely independent cycles determined by the seed. SFC64 incorporates a 64-bit counter so every unique seed is at least \(2^{64}\) iterations away from any other seed. And finally, MT19937 has just an unimaginably huge period. Getting a collision internal to SeedSequence is the way a failure would be observed.

Independent Streams

Philox is a counter-based RNG which generates values by encrypting an incrementing counter using weak cryptographic primitives. The seed determines the key that is used for the encryption. Unique keys create unique, independent streams. Philox lets you bypass the seeding algorithm to directly set the 128-bit key. Similar, but different, keys will still create independent streams.

import secrets
from numpy.random import Philox

# 128-bit number as a seed
root_seed = secrets.getrandbits(128)
streams = [Philox(key=root_seed + stream_id) for stream_id in range(10)]

This scheme does require that you avoid reusing stream IDs. This may require coordination between the parallel processes.

Jumping the BitGenerator state

jumped advances the state of the BitGenerator as-if a large number of random numbers have been drawn, and returns a new instance with this state. The specific number of draws varies by BitGenerator, and ranges from \(2^{64}\) to \(2^{128}\). Additionally, the as-if draws also depend on the size of the default random number produced by the specific BitGenerator.
The BitGenerators that support jumped, along with the period of the BitGenerator, the size of the jump and the bits in the default unsigned random are listed below.

BitGenerator   Period             Jump Size               Bits per Draw
MT19937        \(2^{19937}-1\)    \(2^{128}\)             32
PCG64          \(2^{128}\)        \(\sim2^{127}\) (3)     64
PCG64DXSM      \(2^{128}\)        \(\sim2^{127}\) (3)     64
Philox         \(2^{256}\)        \(2^{128}\)             64

3 The jump size is \((\phi-1)*2^{128}\) where \(\phi\) is the golden ratio. As the jumps wrap around the period, the actual distances between neighboring streams will slowly grow smaller than the jump size, but using the golden ratio this way is a classic method of constructing a low-discrepancy sequence that spreads out the states around the period optimally. You will not be able to jump enough to make those distances small enough to overlap in your lifetime.

jumped can be used to produce long blocks which should be long enough to not overlap.

import secrets
from numpy.random import PCG64

seed = secrets.getrandbits(128)
blocked_rng = []
rng = PCG64(seed)
for i in range(10):
    blocked_rng.append(rng.jumped(i))

When using jumped, one does have to take care not to jump to a stream that was already used. In the above example, one could not later use blocked_rng[0].jumped() as it would overlap with blocked_rng[1]. Like with the independent streams, if the main process here wants to split off 10 more streams by jumping, then it needs to start with range(10, 20), otherwise it would recreate the same streams. On the other hand, if you carefully construct the streams, then you are guaranteed to have streams that do not overlap.
numpy.reference.random.parallel
Broadcasting See also numpy.broadcast The term broadcasting describes how NumPy treats arrays with different shapes during arithmetic operations. Subject to certain constraints, the smaller array is “broadcast” across the larger array so that they have compatible shapes. Broadcasting provides a means of vectorizing array operations so that looping occurs in C instead of Python. It does this without making needless copies of data and usually leads to efficient algorithm implementations. There are, however, cases where broadcasting is a bad idea because it leads to inefficient use of memory that slows computation. NumPy operations are usually done on pairs of arrays on an element-by-element basis. In the simplest case, the two arrays must have exactly the same shape, as in the following example: >>> a = np.array([1.0, 2.0, 3.0]) >>> b = np.array([2.0, 2.0, 2.0]) >>> a * b array([ 2., 4., 6.]) NumPy’s broadcasting rule relaxes this constraint when the arrays’ shapes meet certain constraints. The simplest broadcasting example occurs when an array and a scalar value are combined in an operation: >>> a = np.array([1.0, 2.0, 3.0]) >>> b = 2.0 >>> a * b array([ 2., 4., 6.]) The result is equivalent to the previous example where b was an array. We can think of the scalar b being stretched during the arithmetic operation into an array with the same shape as a. The new elements in b, as shown in Figure 1, are simply copies of the original scalar. The stretching analogy is only conceptual. NumPy is smart enough to use the original scalar value without actually making copies so that broadcasting operations are as memory and computationally efficient as possible. Figure 1 In the simplest example of broadcasting, the scalar b is stretched to become an array of same shape as a so the shapes are compatible for element-by-element multiplication. 
The code in the second example is more efficient than that in the first because broadcasting moves less memory around during the multiplication (b is a scalar rather than an array).

General Broadcasting Rules

When operating on two arrays, NumPy compares their shapes element-wise. It starts with the trailing (i.e. rightmost) dimensions and works its way left. Two dimensions are compatible when

they are equal, or
one of them is 1.

If these conditions are not met, a ValueError: operands could not be broadcast together exception is thrown, indicating that the arrays have incompatible shapes. The size of the resulting array is the size that is not 1 along each axis of the inputs.

Arrays do not need to have the same number of dimensions. For example, if you have a 256x256x3 array of RGB values, and you want to scale each color in the image by a different value, you can multiply the image by a one-dimensional array with 3 values. Lining up the sizes of the trailing axes of these arrays according to the broadcast rules shows that they are compatible:

Image  (3d array): 256 x 256 x 3
Scale  (1d array):             3
Result (3d array): 256 x 256 x 3

When either of the dimensions compared is one, the other is used. In other words, dimensions with size 1 are stretched or “copied” to match the other. In the following example, both the A and B arrays have axes with length one that are expanded to a larger size during the broadcast operation:

A      (4d array):  8 x 1 x 6 x 1
B      (3d array):      7 x 1 x 5
Result (4d array):  8 x 7 x 6 x 5

Broadcastable arrays

A set of arrays is called “broadcastable” to the same shape if the above rules produce a valid result.
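These shape-compatibility rules can also be checked programmatically with np.broadcast_shapes (available since NumPy 1.20); a short sketch using the shapes discussed in this section:

```python
import numpy as np

# The 256x256x3 image scaled by a length-3 array:
print(np.broadcast_shapes((256, 256, 3), (3,)))      # (256, 256, 3)

# Both operands have length-1 axes that get stretched:
print(np.broadcast_shapes((8, 1, 6, 1), (7, 1, 5)))  # (8, 7, 6, 5)

# Incompatible trailing dimensions raise a ValueError:
try:
    np.broadcast_shapes((3,), (4,))
except ValueError as e:
    print("not broadcastable:", e)
```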
For example, if a.shape is (5,1), b.shape is (1,6), c.shape is (6,) and d.shape is () so that d is a scalar, then a, b, c, and d are all broadcastable to dimension (5,6); and

a acts like a (5,6) array where a[:,0] is broadcast to the other columns,
b acts like a (5,6) array where b[0,:] is broadcast to the other rows,
c acts like a (1,6) array and therefore like a (5,6) array where c[:] is broadcast to every row, and finally,
d acts like a (5,6) array where the single value is repeated.

Here are some more examples:

A      (2d array): 5 x 4
B      (1d array):     1
Result (2d array): 5 x 4

A      (2d array): 5 x 4
B      (1d array):     4
Result (2d array): 5 x 4

A      (3d array): 15 x 3 x 5
B      (3d array): 15 x 1 x 5
Result (3d array): 15 x 3 x 5

A      (3d array): 15 x 3 x 5
B      (2d array):      3 x 5
Result (3d array): 15 x 3 x 5

A      (3d array): 15 x 3 x 5
B      (2d array):      3 x 1
Result (3d array): 15 x 3 x 5

Here are examples of shapes that do not broadcast:

A      (1d array): 3
B      (1d array): 4            # trailing dimensions do not match

A      (2d array):     2 x 1
B      (3d array): 8 x 4 x 3    # second from last dimensions mismatched

An example of broadcasting when a 1-d array is added to a 2-d array:

>>> a = np.array([[ 0.0,  0.0,  0.0],
...               [10.0, 10.0, 10.0],
...               [20.0, 20.0, 20.0],
...               [30.0, 30.0, 30.0]])
>>> b = np.array([1.0, 2.0, 3.0])
>>> a + b
array([[  1.,   2.,   3.],
       [ 11.,  12.,  13.],
       [ 21.,  22.,  23.],
       [ 31.,  32.,  33.]])
>>> b = np.array([1.0, 2.0, 3.0, 4.0])
>>> a + b
Traceback (most recent call last):
ValueError: operands could not be broadcast together with shapes (4,3) (4,)

As shown in Figure 2, b is added to each row of a. In Figure 3, an exception is raised because of the incompatible shapes.

Figure 2 A one dimensional array added to a two dimensional array results in broadcasting if number of 1-d array elements matches the number of 2-d array columns.
Figure 3 When the trailing dimensions of the arrays are unequal, broadcasting fails because it is impossible to align the values in the rows of the 1st array with the elements of the 2nd arrays for element-by-element addition. Broadcasting provides a convenient way of taking the outer product (or any other outer operation) of two arrays. The following example shows an outer addition operation of two 1-d arrays: >>> a = np.array([0.0, 10.0, 20.0, 30.0]) >>> b = np.array([1.0, 2.0, 3.0]) >>> a[:, np.newaxis] + b array([[ 1., 2., 3.], [ 11., 12., 13.], [ 21., 22., 23.], [ 31., 32., 33.]]) Figure 4 In some cases, broadcasting stretches both arrays to form an output array larger than either of the initial arrays. Here the newaxis index operator inserts a new axis into a, making it a two-dimensional 4x1 array. Combining the 4x1 array with b, which has shape (3,), yields a 4x3 array. A Practical Example: Vector Quantization Broadcasting comes up quite often in real world problems. A typical example occurs in the vector quantization (VQ) algorithm used in information theory, classification, and other related areas. The basic operation in VQ finds the closest point in a set of points, called codes in VQ jargon, to a given point, called the observation. In the very simple, two-dimensional case shown below, the values in observation describe the weight and height of an athlete to be classified. The codes represent different classes of athletes. 1 Finding the closest point requires calculating the distance between observation and each of the codes. The shortest distance provides the best match. In this example, codes[0] is the closest class indicating that the athlete is likely a basketball player. >>> from numpy import array, argmin, sqrt, sum >>> observation = array([111.0, 188.0]) >>> codes = array([[102.0, 203.0], ... [132.0, 193.0], ... [45.0, 155.0], ... 
[57.0, 173.0]]) >>> diff = codes - observation # the broadcast happens here >>> dist = sqrt(sum(diff**2,axis=-1)) >>> argmin(dist) 0 In this example, the observation array is stretched to match the shape of the codes array: Observation (1d array): 2 Codes (2d array): 4 x 2 Diff (2d array): 4 x 2 Figure 5 The basic operation of vector quantization calculates the distance between an object to be classified, the dark square, and multiple known codes, the gray circles. In this simple case, the codes represent individual classes. More complex cases use multiple codes per class. Typically, a large number of observations, perhaps read from a database, are compared to a set of codes. Consider this scenario: Observation (2d array): 10 x 3 Codes (2d array): 5 x 3 Diff (3d array): 5 x 10 x 3 The three-dimensional array, diff, is a consequence of broadcasting, not a necessity for the calculation. Large data sets will generate a large intermediate array that is computationally inefficient. Instead, if each observation is calculated individually using a Python loop around the code in the two-dimensional example above, a much smaller array is used. Broadcasting is a powerful tool for writing short and usually intuitive code that does its computations very efficiently in C. However, there are cases when broadcasting uses unnecessarily large amounts of memory for a particular algorithm. In these cases, it is better to write the algorithm’s outer loop in Python. This may also produce more readable code, as algorithms that use broadcasting tend to become more difficult to interpret as the number of dimensions in the broadcast increases. Footnotes 1 In this example, weight has more impact on the distance calculation than height because of the larger values. In practice, it is important to normalize the height and weight, often by their standard deviation across the data set, so that both have equal influence on the distance calculation.
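The batched scenario above can be written either with an explicit broadcast (building the full 5 x 10 x 3 intermediate) or with a per-observation loop that keeps the working set small. A sketch with made-up random data, just to show the two equivalent formulations:

```python
import numpy as np

rng = np.random.default_rng(0)
observations = rng.standard_normal((10, 3))  # 10 observations, 3 features
codes = rng.standard_normal((5, 3))          # 5 codes

# Broadcast version: codes (5,1,3) against observations (10,3) -> diff (5,10,3).
diff = codes[:, np.newaxis, :] - observations
dist = np.sqrt(np.sum(diff**2, axis=-1))     # shape (5, 10)
nearest = np.argmin(dist, axis=0)            # best code index per observation

# Loop version: one observation at a time, only a (5,3) intermediate.
nearest_loop = np.array(
    [np.argmin(np.sqrt(np.sum((codes - obs) ** 2, axis=-1)))
     for obs in observations]
)
assert (nearest == nearest_loop).all()
```

The loop version trades a small amount of Python overhead for a much smaller intermediate array, which is exactly the tradeoff discussed in the text.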
numpy.user.basics.broadcasting
Polynomials Polynomials in NumPy can be created, manipulated, and even fitted using the convenience classes of the numpy.polynomial package, introduced in NumPy 1.4. Prior to NumPy 1.4, numpy.poly1d was the class of choice and it is still available in order to maintain backward compatibility. However, the newer polynomial package is more complete and its convenience classes provide a more consistent, better-behaved interface for working with polynomial expressions. Therefore numpy.polynomial is recommended for new coding. Note Terminology The term polynomial module refers to the old API defined in numpy.lib.polynomial, which includes the numpy.poly1d class and the polynomial functions prefixed with poly accessible from the numpy namespace (e.g. numpy.polyadd, numpy.polyval, numpy.polyfit, etc.). The term polynomial package refers to the new API defined in numpy.polynomial, which includes the convenience classes for the different kinds of polynomials (numpy.polynomial.Polynomial, numpy.polynomial.Chebyshev, etc.). Transitioning from numpy.poly1d to numpy.polynomial As noted above, the poly1d class and associated functions defined in numpy.lib.polynomial, such as numpy.polyfit and numpy.poly, are considered legacy and should not be used in new code. Since NumPy version 1.4, the numpy.polynomial package is preferred for working with polynomials. Quick Reference The following table highlights some of the main differences between the legacy polynomial module and the polynomial package for common tasks. 
The Polynomial class is imported for brevity:

from numpy.polynomial import Polynomial

How to…                                           Legacy (numpy.poly1d)                     numpy.polynomial
Create a polynomial object from coefficients (1)  p = np.poly1d([1, 2, 3])                  p = Polynomial([3, 2, 1])
Create a polynomial object from roots             r = np.poly([-1, 1]); p = np.poly1d(r)    p = Polynomial.fromroots([-1, 1])
Fit a polynomial of degree deg to data            np.polyfit(x, y, deg)                     Polynomial.fit(x, y, deg)

(1) Note the reversed ordering of the coefficients

Transition Guide

There are significant differences between numpy.lib.polynomial and numpy.polynomial. The most significant difference is the ordering of the coefficients for the polynomial expressions. The various routines in numpy.polynomial all deal with series whose coefficients go from degree zero upward, which is the reverse order of the poly1d convention. The easy way to remember this is that indices correspond to degree, i.e., coef[i] is the coefficient of the term of degree i.

Though the difference in convention may be confusing, it is straightforward to convert from the legacy polynomial API to the new. For example, the following demonstrates how you would convert a numpy.poly1d instance representing the expression \(x^{2} + 2x + 3\) to a Polynomial instance representing the same expression:

>>> p1d = np.poly1d([1, 2, 3])
>>> p = np.polynomial.Polynomial(p1d.coef[::-1])

In addition to the coef attribute, polynomials from the polynomial package also have domain and window attributes. These attributes are most relevant when fitting polynomials to data, though it should be noted that polynomials with different domain and window attributes are not considered equal, and can’t be mixed in arithmetic:

>>> p1 = np.polynomial.Polynomial([1, 2, 3])
>>> p1
Polynomial([1., 2., 3.], domain=[-1, 1], window=[-1, 1])
>>> p2 = np.polynomial.Polynomial([1, 2, 3], domain=[-2, 2])
>>> p1 == p2
False
>>> p1 + p2
Traceback (most recent call last):
...
TypeError: Domains differ See the documentation for the convenience classes for further details on the domain and window attributes. Another major difference between the legacy polynomial module and the polynomial package is polynomial fitting. In the old module, fitting was done via the polyfit function. In the polynomial package, the fit class method is preferred. For example, consider a simple linear fit to the following data: In [1]: rng = np.random.default_rng() In [2]: x = np.arange(10) In [3]: y = np.arange(10) + rng.standard_normal(10) With the legacy polynomial module, a linear fit (i.e. polynomial of degree 1) could be applied to these data with polyfit: In [4]: np.polyfit(x, y, deg=1) Out[4]: array([0.89865114, 0.25736425]) With the new polynomial API, the fit class method is preferred: In [5]: p_fitted = np.polynomial.Polynomial.fit(x, y, deg=1) In [6]: p_fitted Out[6]: Polynomial([4.30129436, 4.04393012], domain=[0., 9.], window=[-1., 1.]) Note that the coefficients are given in the scaled domain defined by the linear mapping between the window and domain. convert can be used to get the coefficients in the unscaled data domain. In [7]: p_fitted.convert() Out[7]: Polynomial([0.25736425, 0.89865114], domain=[-1., 1.], window=[-1., 1.]) Documentation for the polynomial Package In addition to standard power series polynomials, the polynomial package provides several additional kinds of polynomials including Chebyshev, Hermite (two subtypes), Laguerre, and Legendre polynomials. Each of these has an associated convenience class available from the numpy.polynomial namespace that provides a consistent interface for working with polynomials regardless of their type. 
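As a quick sketch of that consistent interface, the Chebyshev class supports the same coefficient construction, call-style evaluation, and basis conversion as Polynomial:

```python
import numpy as np
from numpy.polynomial import Chebyshev, Polynomial

# Same coefficient-construction interface for every kind of polynomial;
# coef[i] multiplies the degree-i basis function.
c = Chebyshev([1, 2, 3])   # 1*T_0(x) + 2*T_1(x) + 3*T_2(x)
p = Polynomial([1, 2, 3])  # 1 + 2*x + 3*x**2

# Evaluation by calling; T_0(0)=1, T_1(0)=0, T_2(0)=-1, so c(0) == -2.
print(c(0.0))              # -2.0
print(p(0.0))              # 1.0

# Conversion between bases: 1 + 2*T_1 + 3*T_2 == -2 + 2*x + 6*x**2.
print(c.convert(kind=Polynomial).coef)
```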
Using the Convenience Classes Documentation pertaining to specific functions defined for each kind of polynomial individually can be found in the corresponding module documentation: Power Series (numpy.polynomial.polynomial) Chebyshev Series (numpy.polynomial.chebyshev) Hermite Series, “Physicists” (numpy.polynomial.hermite) HermiteE Series, “Probabilists” (numpy.polynomial.hermite_e) Laguerre Series (numpy.polynomial.laguerre) Legendre Series (numpy.polynomial.legendre) Polyutils Documentation for Legacy Polynomials Poly1d Basics Fitting Calculus Arithmetic Warnings
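Returning to the transition guide above, the legacy-to-new conversion can be checked numerically: a poly1d and a Polynomial built from the reversed coefficient array are the same callable expression.

```python
import numpy as np
from numpy.polynomial import Polynomial

p1d = np.poly1d([1, 2, 3])          # x**2 + 2x + 3, highest degree first
p = Polynomial(p1d.coef[::-1])      # same expression, degree 0 upward

# Both evaluate identically; at x = 2: 4 + 4 + 3 = 11
print(p1d(2.0), p(2.0))
```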
numpy.reference.routines.polynomials
Chebyshev Series (numpy.polynomial.chebyshev) This module provides a number of objects (mostly functions) useful for dealing with Chebyshev series, including a Chebyshev class that encapsulates the usual arithmetic operations. (General information on how this module represents and works with such polynomials is in the docstring for its “parent” sub-package, numpy.polynomial). Classes Chebyshev(coef[, domain, window]) A Chebyshev series class. Constants chebdomain An array object represents a multidimensional, homogeneous array of fixed-size items. chebzero An array object represents a multidimensional, homogeneous array of fixed-size items. chebone An array object represents a multidimensional, homogeneous array of fixed-size items. chebx An array object represents a multidimensional, homogeneous array of fixed-size items. Arithmetic chebadd(c1, c2) Add one Chebyshev series to another. chebsub(c1, c2) Subtract one Chebyshev series from another. chebmulx(c) Multiply a Chebyshev series by x. chebmul(c1, c2) Multiply one Chebyshev series by another. chebdiv(c1, c2) Divide one Chebyshev series by another. chebpow(c, pow[, maxpower]) Raise a Chebyshev series to a power. chebval(x, c[, tensor]) Evaluate a Chebyshev series at points x. chebval2d(x, y, c) Evaluate a 2-D Chebyshev series at points (x, y). chebval3d(x, y, z, c) Evaluate a 3-D Chebyshev series at points (x, y, z). chebgrid2d(x, y, c) Evaluate a 2-D Chebyshev series on the Cartesian product of x and y. chebgrid3d(x, y, z, c) Evaluate a 3-D Chebyshev series on the Cartesian product of x, y, and z. Calculus chebder(c[, m, scl, axis]) Differentiate a Chebyshev series. chebint(c[, m, k, lbnd, scl, axis]) Integrate a Chebyshev series. Misc Functions chebfromroots(roots) Generate a Chebyshev series with given roots. chebroots(c) Compute the roots of a Chebyshev series. chebvander(x, deg) Pseudo-Vandermonde matrix of given degree. chebvander2d(x, y, deg) Pseudo-Vandermonde matrix of given degrees. 
chebvander3d(x, y, z, deg) Pseudo-Vandermonde matrix of given degrees. chebgauss(deg) Gauss-Chebyshev quadrature. chebweight(x) The weight function of the Chebyshev polynomials. chebcompanion(c) Return the scaled companion matrix of c. chebfit(x, y, deg[, rcond, full, w]) Least squares fit of Chebyshev series to data. chebpts1(npts) Chebyshev points of the first kind. chebpts2(npts) Chebyshev points of the second kind. chebtrim(c[, tol]) Remove "small" "trailing" coefficients from a polynomial. chebline(off, scl) Chebyshev series whose graph is a straight line. cheb2poly(c) Convert a Chebyshev series to a polynomial. poly2cheb(pol) Convert a polynomial to a Chebyshev series. chebinterpolate(func, deg[, args]) Interpolate a function at the Chebyshev points of the first kind. See also numpy.polynomial Notes The implementations of multiplication, division, integration, and differentiation use the algebraic identities [1]: \[\begin{split}T_n(x) = \frac{z^n + z^{-n}}{2} \\ z\frac{dx}{dz} = \frac{z - z^{-1}}{2}.\end{split}\] where \[x = \frac{z + z^{-1}}{2}.\] These identities allow a Chebyshev series to be expressed as a finite, symmetric Laurent series. In this module, this sort of Laurent series is referred to as a “z-series.” References 1 A. T. Benjamin, et al., “Combinatorial Trigonometry with Chebyshev Polynomials,” Journal of Statistical Planning and Inference 14, 2008 (https://web.archive.org/web/20080221202153/https://www.math.hmc.edu/~benjamin/papers/CombTrig.pdf, pg. 4)
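A short sketch of the module-level routines listed above (coefficient arrays run from degree zero upward): multiplying T_1 by itself gives (T_0 + T_2)/2, since x**2 = (1 + T_2(x))/2, and chebval evaluates a coefficient array at given points.

```python
import numpy as np
from numpy.polynomial import chebyshev as C

# T_1 * T_1 as Chebyshev coefficient arrays: [0, 1] * [0, 1]
prod = C.chebmul([0, 1], [0, 1])
print(prod)                        # 0.5*T_0 + 0*T_1 + 0.5*T_2

# chebval evaluates a Chebyshev series: T_2(0.5) = 2*0.25 - 1 = -0.5
print(C.chebval(0.5, [0, 0, 1]))
```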
numpy.reference.routines.polynomials.chebyshev
add_data_dir(data_path)[source] Recursively add files under data_path to data_files list. Recursively add files under data_path to the list of data_files to be installed (and distributed). The data_path can be either a relative path-name, or an absolute path-name, or a 2-tuple where the first argument shows where in the install directory the data directory should be installed to. Parameters data_pathseq or str Argument can be either 2-sequence (<datadir suffix>, <path to data directory>) path to data directory where python datadir suffix defaults to package dir. Notes Rules for installation paths: foo/bar -> (foo/bar, foo/bar) -> parent/foo/bar (gun, foo/bar) -> parent/gun foo/* -> (foo/a, foo/a), (foo/b, foo/b) -> parent/foo/a, parent/foo/b (gun, foo/*) -> (gun, foo/a), (gun, foo/b) -> gun (gun/*, foo/*) -> parent/gun/a, parent/gun/b /foo/bar -> (bar, /foo/bar) -> parent/bar (gun, /foo/bar) -> parent/gun (fun/*/gun/*, sun/foo/bar) -> parent/fun/foo/gun/bar Examples For example suppose the source directory contains fun/foo.dat and fun/bar/car.dat: >>> self.add_data_dir('fun') >>> self.add_data_dir(('sun', 'fun')) >>> self.add_data_dir(('gun', '/full/path/to/fun')) Will install data-files to the locations: <package install directory>/ fun/ foo.dat bar/ car.dat sun/ foo.dat bar/ car.dat gun/ foo.dat car.dat
numpy.reference.distutils#numpy.distutils.misc_util.Configuration.add_data_dir
add_data_files(*files)[source] Add data files to configuration data_files. Parameters filessequence Argument(s) can be either 2-sequence (<datadir prefix>,<path to data file(s)>) paths to data files where python datadir prefix defaults to package dir. Notes The form of each element of the files sequence is very flexible allowing many combinations of where to get the files from the package and where they should ultimately be installed on the system. The most basic usage is for an element of the files argument sequence to be a simple filename. This will cause that file from the local path to be installed to the installation path of the self.name package (package path). The file argument can also be a relative path in which case the entire relative path will be installed into the package directory. Finally, the file can be an absolute path name in which case the file will be found at the absolute path name but installed to the package path. This basic behavior can be augmented by passing a 2-tuple in as the file argument. The first element of the tuple should specify the relative path (under the package install directory) where the remaining sequence of files should be installed to (it has nothing to do with the file-names in the source distribution). The second element of the tuple is the sequence of files that should be installed. The files in this sequence can be filenames, relative paths, or absolute paths. For absolute paths the file will be installed in the top-level package installation directory (regardless of the first argument). Filenames and relative path names will be installed in the package install directory under the path name given as the first element of the tuple. 
Rules for installation paths: file.txt -> (., file.txt) -> parent/file.txt foo/file.txt -> (foo, foo/file.txt) -> parent/foo/file.txt /foo/bar/file.txt -> (., /foo/bar/file.txt) -> parent/file.txt *.txt -> parent/a.txt, parent/b.txt foo/*.txt -> parent/foo/a.txt, parent/foo/b.txt */*.txt -> (*, */*.txt) -> parent/c/a.txt, parent/d/b.txt (sun, file.txt) -> parent/sun/file.txt (sun, bar/file.txt) -> parent/sun/file.txt (sun, /foo/bar/file.txt) -> parent/sun/file.txt (sun, *.txt) -> parent/sun/a.txt, parent/sun/b.txt (sun, bar/*.txt) -> parent/sun/a.txt, parent/sun/b.txt (sun/*, */*.txt) -> parent/sun/c/a.txt, parent/d/b.txt An additional feature is that the path to a data-file can actually be a function that takes no arguments and returns the actual path(s) to the data-files. This is useful when the data files are generated while building the package. Examples Add files to the list of data_files to be included with the package. >>> self.add_data_files('foo.dat', ... ('fun', ['gun.dat', 'nun/pun.dat', '/tmp/sun.dat']), ... 'bar/cat.dat', ... '/full/path/to/can.dat') will install these data files to: <package install directory>/ foo.dat fun/ gun.dat nun/ pun.dat sun.dat bar/ cat.dat can.dat where <package install directory> is the package (or sub-package) directory such as ‘/usr/lib/python2.4/site-packages/mypackage’ (‘C:\Python2.4\Lib\site-packages\mypackage’) or ‘/usr/lib/python2.4/site-packages/mypackage/mysubpackage’ (‘C:\Python2.4\Lib\site-packages\mypackage\mysubpackage’).
numpy.reference.distutils#numpy.distutils.misc_util.Configuration.add_data_files
add_extension(name, sources, **kw)[source] Add extension to configuration. Create and add an Extension instance to the ext_modules list. This method also takes the following optional keyword arguments that are passed on to the Extension constructor. Parameters namestr name of the extension sourcesseq list of the sources. The list of sources may contain functions (called source generators) which must take an extension instance and a build directory as inputs and return a source file or list of source files or None. If None is returned then no sources are generated. If the Extension instance has no sources after processing all source generators, then no extension module is built. include_dirs : define_macros : undef_macros : library_dirs : libraries : runtime_library_dirs : extra_objects : extra_compile_args : extra_link_args : extra_f77_compile_args : extra_f90_compile_args : export_symbols : swig_opts : depends : The depends list contains paths to files or directories that the sources of the extension module depend on. If any path in the depends list is newer than the extension module, then the module will be rebuilt. language : f2py_options : module_dirs : extra_infodict or list dict or list of dict of keywords to be appended to keywords. Notes The self.paths(…) method is applied to all lists that may contain paths.
numpy.reference.distutils#numpy.distutils.misc_util.Configuration.add_extension
add_headers(*files)[source] Add installable headers to configuration. Add the given sequence of files to the beginning of the headers list. By default, headers will be installed under <python- include>/<self.name.replace(‘.’,’/’)>/ directory. If an item of files is a tuple, then its first argument specifies the actual installation location relative to the <python-include> path. Parameters filesstr or seq Argument(s) can be either: 2-sequence (<includedir suffix>,<path to header file(s)>) path(s) to header file(s) where python includedir suffix will default to package name.
numpy.reference.distutils#numpy.distutils.misc_util.Configuration.add_headers
add_include_dirs(*paths)[source] Add paths to configuration include directories. Add the given sequence of paths to the beginning of the include_dirs list. This list will be visible to all extension modules of the current package.
numpy.reference.distutils#numpy.distutils.misc_util.Configuration.add_include_dirs
add_installed_library(name, sources, install_dir, build_info=None)[source] Similar to add_library, but the specified library is installed. Most C libraries used with distutils are only used to build python extensions, but libraries built through this method will be installed so that they can be reused by third-party packages. Parameters namestr Name of the installed library. sourcessequence List of the library’s source files. See add_library for details. install_dirstr Path to install the library, relative to the current sub-package. build_infodict, optional The following keys are allowed: depends macros include_dirs extra_compiler_args extra_f77_compile_args extra_f90_compile_args f2py_options language Returns None See also add_library, add_npy_pkg_config, get_info Notes The best way to encode the options required to link against the specified C libraries is to use a “libname.ini” file, and use get_info to retrieve the required options (see add_npy_pkg_config for more information).
numpy.reference.distutils#numpy.distutils.misc_util.Configuration.add_installed_library
add_library(name, sources, **build_info)[source] Add library to configuration. Parameters namestr Name of the extension. sourcessequence List of the sources. The list of sources may contain functions (called source generators) which must take an extension instance and a build directory as inputs and return a source file or list of source files or None. If None is returned then no sources are generated. If the Extension instance has no sources after processing all source generators, then no extension module is built. build_infodict, optional The following keys are allowed: depends macros include_dirs extra_compiler_args extra_f77_compile_args extra_f90_compile_args f2py_options language
numpy.reference.distutils#numpy.distutils.misc_util.Configuration.add_library
add_npy_pkg_config(template, install_dir, subst_dict=None)[source] Generate and install a npy-pkg config file from a template. The config file generated from template is installed in the given install directory, using subst_dict for variable substitution. Parameters templatestr The path of the template, relative to the current package path. install_dirstr Where to install the npy-pkg config file, relative to the current package path. subst_dictdict, optional If given, any string of the form @key@ will be replaced by subst_dict[key] in the template file when installed. The install prefix is always available through the variable @prefix@, since the install prefix is not easy to get reliably from setup.py. See also add_installed_library, get_info Notes This works for both standard installs and in-place builds, i.e. @prefix@ refers to the source directory for in-place builds. Examples config.add_npy_pkg_config('foo.ini.in', 'lib', {'foo': bar}) Assuming the foo.ini.in file has the following content: [meta] Name=@foo@ Version=1.0 Description=dummy description [default] Cflags=-I@prefix@/include Libs= The generated file will have the following content: [meta] Name=bar Version=1.0 Description=dummy description [default] Cflags=-Iprefix_dir/include Libs= and will be installed as foo.ini in the ‘lib’ subpath. When cross-compiling with numpy distutils, it might be necessary to use modified npy-pkg-config files. Using the default/generated files will link with the host libraries (i.e. libnpymath.a). For cross-compilation you of course need to link with target libraries, while using the host Python installation. You can copy out the numpy/core/lib/npy-pkg-config directory, add a pkgdir value to the .ini files and set the NPY_PKG_CONFIG_PATH environment variable to point to the directory with the modified npy-pkg-config files. 
Example npymath.ini modified for cross-compilation: [meta] Name=npymath Description=Portable, core math library implementing C99 standard Version=0.1 [variables] pkgname=numpy.core pkgdir=/build/arm-linux-gnueabi/sysroot/usr/lib/python3.7/site-packages/numpy/core prefix=${pkgdir} libdir=${prefix}/lib includedir=${prefix}/include [default] Libs=-L${libdir} -lnpymath Cflags=-I${includedir} Requires=mlib [msvc] Libs=/LIBPATH:${libdir} npymath.lib Cflags=/INCLUDE:${includedir} Requires=mlib
numpy.reference.distutils#numpy.distutils.misc_util.Configuration.add_npy_pkg_config
add_scripts(*files)[source] Add scripts to configuration. Add the sequence of files to the beginning of the scripts list. Scripts will be installed under the <prefix>/bin/ directory.
numpy.reference.distutils#numpy.distutils.misc_util.Configuration.add_scripts
add_subpackage(subpackage_name, subpackage_path=None, standalone=False)[source] Add a sub-package to the current Configuration instance. This is useful in a setup.py script for adding sub-packages to a package. Parameters subpackage_namestr name of the subpackage subpackage_pathstr if given, the subpackage path, such that the subpackage is located in subpackage_path / subpackage_name. If None, the subpackage is assumed to be located in the local path / subpackage_name. standalonebool
numpy.reference.distutils#numpy.distutils.misc_util.Configuration.add_subpackage
numpy.broadcast.index attribute broadcast.index current index in broadcasted result Examples >>> x = np.array([[1], [2], [3]]) >>> y = np.array([4, 5, 6]) >>> b = np.broadcast(x, y) >>> b.index 0 >>> next(b), next(b), next(b) ((1, 4), (1, 5), (1, 6)) >>> b.index 3
numpy.reference.generated.numpy.broadcast.index
numpy.broadcast.iters attribute broadcast.iters tuple of iterators along self’s “components.” Returns a tuple of numpy.flatiter objects, one for each “component” of self. See also numpy.flatiter Examples >>> x = np.array([1, 2, 3]) >>> y = np.array([[4], [5], [6]]) >>> b = np.broadcast(x, y) >>> row, col = b.iters >>> next(row), next(col) (1, 4)
numpy.reference.generated.numpy.broadcast.iters
numpy.broadcast.nd attribute broadcast.nd Number of dimensions of broadcasted result. For code intended for NumPy 1.12.0 and later the more consistent ndim is preferred. Examples >>> x = np.array([1, 2, 3]) >>> y = np.array([[4], [5], [6]]) >>> b = np.broadcast(x, y) >>> b.nd 2
numpy.reference.generated.numpy.broadcast.nd
numpy.broadcast.ndim attribute broadcast.ndim Number of dimensions of broadcasted result. Alias for nd. New in version 1.12.0. Examples >>> x = np.array([1, 2, 3]) >>> y = np.array([[4], [5], [6]]) >>> b = np.broadcast(x, y) >>> b.ndim 2
numpy.reference.generated.numpy.broadcast.ndim
numpy.broadcast.numiter attribute broadcast.numiter Number of iterators possessed by the broadcasted result. Examples >>> x = np.array([1, 2, 3]) >>> y = np.array([[4], [5], [6]]) >>> b = np.broadcast(x, y) >>> b.numiter 2
numpy.reference.generated.numpy.broadcast.numiter
numpy.broadcast.reset method broadcast.reset() Reset the broadcasted result’s iterator(s). Parameters None Returns None Examples >>> x = np.array([1, 2, 3]) >>> y = np.array([[4], [5], [6]]) >>> b = np.broadcast(x, y) >>> b.index 0 >>> next(b), next(b), next(b) ((1, 4), (2, 4), (3, 4)) >>> b.index 3 >>> b.reset() >>> b.index 0
numpy.reference.generated.numpy.broadcast.reset
numpy.broadcast.size attribute broadcast.size Total size of broadcasted result. Examples >>> x = np.array([1, 2, 3]) >>> y = np.array([[4], [5], [6]]) >>> b = np.broadcast(x, y) >>> b.size 9
numpy.reference.generated.numpy.broadcast.size
Using via numpy.distutils numpy.distutils is part of NumPy, and extends the standard Python distutils module to deal with Fortran sources and F2PY signature files, e.g. compile Fortran sources, call F2PY to construct extension modules, etc. Example Consider the following setup_example.py for the fib and scalar examples from the Three ways to wrap - getting started section: from numpy.distutils.core import Extension ext1 = Extension(name = 'scalar', sources = ['scalar.f']) ext2 = Extension(name = 'fib2', sources = ['fib2.pyf', 'fib1.f']) if __name__ == "__main__": from numpy.distutils.core import setup setup(name = 'f2py_example', description = "F2PY Users Guide examples", author = "Pearu Peterson", author_email = "[email protected]", ext_modules = [ext1, ext2] ) # End of setup_example.py Running python setup_example.py build will build two extension modules scalar and fib2 to the build directory. Extensions to distutils numpy.distutils extends distutils with the following features: Extension class argument sources may contain Fortran source files. In addition, the list sources may contain at most one F2PY signature file, and in this case, the name of an Extension module must match the <modulename> used in the signature file. It is assumed that an F2PY signature file contains exactly one python module block. If sources do not contain a signature file, then F2PY is used to scan Fortran source files to construct wrappers to the Fortran codes. Additional options to the F2PY executable can be given using the Extension class argument f2py_options. The following new distutils commands are defined: build_src to construct Fortran wrapper extension modules, among many other things. config_fc to change Fortran compiler options. Additionally, the build_ext and build_clib commands are also enhanced to support Fortran sources. Run python <setup.py file> config_fc build_src build_ext --help to see available options for these commands. 
When building Python packages containing Fortran sources, one can choose different Fortran compilers by using the build_ext command option --fcompiler=<Vendor>. Here <Vendor> can be one of the following names (on linux systems): absoft compaq fujitsu g95 gnu gnu95 intel intele intelem lahey nag nagfor nv pathf95 pg vast See numpy_distutils/fcompiler.py for an up-to-date list of supported compilers for different platforms, or run python -m numpy.f2py -c --help-fcompiler
numpy.f2py.buildtools.distutils
numpy.busdaycalendar.holidays attribute busdaycalendar.holidays A copy of the holiday array indicating additional invalid days.
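A brief sketch of the attribute (the dates are arbitrary, chosen for illustration): holidays supplied to the constructor come back as a normalized datetime64[D] copy, and those dates are treated as invalid by the busday functions.

```python
import numpy as np

# Two weekday dates declared as holidays (a Friday and a Monday)
cal = np.busdaycalendar(holidays=['2011-07-01', '2011-07-04'])

print(cal.holidays)                               # datetime64[D] copy of the holiday list
print(np.is_busday('2011-07-04', busdaycal=cal))  # a holiday is not a business day
```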
numpy.reference.generated.numpy.busdaycalendar.holidays
numpy.busdaycalendar.weekmask attribute busdaycalendar.weekmask A copy of the seven-element boolean mask indicating valid days.
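A small sketch of the attribute: the default weekmask marks Monday through Friday valid, and a custom mask can be supplied as a string of seven 0/1 flags starting on Monday.

```python
import numpy as np

# Default calendar: Monday-Friday valid, weekend invalid
cal = np.busdaycalendar()
print(cal.weekmask)

# Custom mask with Saturday also counted as a valid day
cal2 = np.busdaycalendar(weekmask='1111110')
print(cal2.weekmask)
```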
numpy.reference.generated.numpy.busdaycalendar.weekmask
Byte-swapping Introduction to byte ordering and ndarrays The ndarray is an object that provide a python array interface to data in memory. It often happens that the memory that you want to view with an array is not of the same byte ordering as the computer on which you are running Python. For example, I might be working on a computer with a little-endian CPU - such as an Intel Pentium, but I have loaded some data from a file written by a computer that is big-endian. Let’s say I have loaded 4 bytes from a file written by a Sun (big-endian) computer. I know that these 4 bytes represent two 16-bit integers. On a big-endian machine, a two-byte integer is stored with the Most Significant Byte (MSB) first, and then the Least Significant Byte (LSB). Thus the bytes are, in memory order: MSB integer 1 LSB integer 1 MSB integer 2 LSB integer 2 Let’s say the two integers were in fact 1 and 770. Because 770 = 256 * 3 + 2, the 4 bytes in memory would contain respectively: 0, 1, 3, 2. The bytes I have loaded from the file would have these contents: >>> big_end_buffer = bytearray([0,1,3,2]) >>> big_end_buffer bytearray(b'\\x00\\x01\\x03\\x02') We might want to use an ndarray to access these integers. In that case, we can create an array around this memory, and tell numpy that there are two integers, and that they are 16 bit and big-endian: >>> import numpy as np >>> big_end_arr = np.ndarray(shape=(2,),dtype='>i2', buffer=big_end_buffer) >>> big_end_arr[0] 1 >>> big_end_arr[1] 770 Note the array dtype above of >i2. The > means ‘big-endian’ (< is little-endian) and i2 means ‘signed 2-byte integer’. For example, if our data represented a single unsigned 4-byte little-endian integer, the dtype string would be <u4. In fact, why don’t we try that? 
>>> little_end_u4 = np.ndarray(shape=(1,),dtype='<u4', buffer=big_end_buffer) >>> little_end_u4[0] == 1 * 256**1 + 3 * 256**2 + 2 * 256**3 True Returning to our big_end_arr - in this case our underlying data is big-endian (data endianness) and we’ve set the dtype to match (the dtype is also big-endian). However, sometimes you need to flip these around. Warning Scalars currently do not include byte order information, so extracting a scalar from an array will return an integer in native byte order. Hence: >>> big_end_arr[0].dtype.byteorder == little_end_u4[0].dtype.byteorder True Changing byte ordering As you can imagine from the introduction, there are two ways you can affect the relationship between the byte ordering of the array and the underlying memory it is looking at: Change the byte-ordering information in the array dtype so that it interprets the underlying data as being in a different byte order. This is the role of arr.newbyteorder() Change the byte-ordering of the underlying data, leaving the dtype interpretation as it was. This is what arr.byteswap() does. The common situations in which you need to change byte ordering are: Your data and dtype endianness don’t match, and you want to change the dtype so that it matches the data. 
Your data and dtype endianness don’t match, and you want to swap the data so that they match the dtype Your data and dtype endianness match, but you want the data swapped and the dtype to reflect this Data and dtype endianness don’t match, change dtype to match data We make something where they don’t match: >>> wrong_end_dtype_arr = np.ndarray(shape=(2,),dtype='<i2', buffer=big_end_buffer) >>> wrong_end_dtype_arr[0] 256 The obvious fix for this situation is to change the dtype so it gives the correct endianness: >>> fixed_end_dtype_arr = wrong_end_dtype_arr.newbyteorder() >>> fixed_end_dtype_arr[0] 1 Note the array has not changed in memory: >>> fixed_end_dtype_arr.tobytes() == big_end_buffer True Data and type endianness don’t match, change data to match dtype You might want to do this if you need the data in memory to be a certain ordering. For example you might be writing the memory out to a file that needs a certain byte ordering. >>> fixed_end_mem_arr = wrong_end_dtype_arr.byteswap() >>> fixed_end_mem_arr[0] 1 Now the array has changed in memory: >>> fixed_end_mem_arr.tobytes() == big_end_buffer False Data and dtype endianness match, swap data and dtype You may have a correctly specified array dtype, but you need the array to have the opposite byte order in memory, and you want the dtype to match so the array values make sense. In this case you just do both of the previous operations: >>> swapped_end_arr = big_end_arr.byteswap().newbyteorder() >>> swapped_end_arr[0] 1 >>> swapped_end_arr.tobytes() == big_end_buffer False An easier way of casting the data to a specific dtype and byte ordering can be achieved with the ndarray astype method: >>> swapped_end_arr = big_end_arr.astype('<i2') >>> swapped_end_arr[0] 1 >>> swapped_end_arr.tobytes() == big_end_buffer False
numpy.user.basics.byteswapping
numpy.char.add char.add(x1, x2)[source] Return element-wise string concatenation for two arrays of str or unicode. Arrays x1 and x2 must have the same shape. Parameters x1array_like of str or unicode Input array. x2array_like of str or unicode Input array. Returns addndarray Output array of string_ or unicode_ (depending on input types), of the same shape as x1 and x2.
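A quick sketch of the element-wise concatenation (the array contents are arbitrary examples): each element of the first array is joined with the corresponding element of the second, and the output itemsize grows to fit both.

```python
import numpy as np

greetings = np.array(['Hello, ', 'Goodbye, '])
names = np.array(['Alice', 'Bob'])

# Element-wise string concatenation
combined = np.char.add(greetings, names)
print(combined)
```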
numpy.reference.generated.numpy.char.add
numpy.char.array char.array(obj, itemsize=None, copy=True, unicode=None, order=None)[source] Create a chararray. Note This class is provided for numarray backward-compatibility. New code (not concerned with numarray compatibility) should use arrays of type string_ or unicode_ and use the free functions in numpy.char for fast vectorized string operations instead. Versus a regular NumPy array of type str or unicode, this class adds the following functionality: values automatically have whitespace removed from the end when indexed comparison operators automatically remove whitespace from the end when comparing values vectorized string operations are provided as methods (e.g. str.endswith) and infix operators (e.g. +, *, %) Parameters objarray of str or unicode-like itemsizeint, optional itemsize is the number of characters per scalar in the resulting array. If itemsize is None, and obj is an object array or a Python list, the itemsize will be automatically determined. If itemsize is provided and obj is of type str or unicode, then the obj string will be chunked into itemsize pieces. copybool, optional If true (default), then the object is copied. Otherwise, a copy will only be made if __array__ returns a copy, if obj is a nested sequence, or if a copy is needed to satisfy any of the other requirements (itemsize, unicode, order, etc.). unicodebool, optional When true, the resulting chararray can contain Unicode characters, when false only 8-bit characters. If unicode is None and obj is one of the following: a chararray, an ndarray of type str or unicode a Python str or unicode object, then the unicode setting of the output array will be automatically determined. order{‘C’, ‘F’, ‘A’}, optional Specify the order of the array. If order is ‘C’ (default), then the array will be in C-contiguous order (last-index varies the fastest). If order is ‘F’, then the returned array will be in Fortran-contiguous order (first-index varies the fastest). 
If order is ‘A’, then the returned array may be in any order (either C-, Fortran-contiguous, or even discontiguous).
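A small sketch of the whitespace and operator behavior described above (example strings are arbitrary):

```python
import numpy as np

c = np.char.array(['cat  ', 'dog'])

# Trailing whitespace is removed when elements are indexed...
print(c[0])                              # 'cat', not 'cat  '

# ...and when elements are compared
print(c == np.char.array(['cat', 'dog']))

# Vectorized infix operators, e.g. element-wise repetition with *
print((np.char.array(['ab']) * 2)[0])
```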
numpy.reference.generated.numpy.char.array
numpy.char.asarray char.asarray(obj, itemsize=None, unicode=None, order=None)[source] Convert the input to a chararray, copying the data only if necessary. Versus a regular NumPy array of type str or unicode, this class adds the following functionality: values automatically have whitespace removed from the end when indexed comparison operators automatically remove whitespace from the end when comparing values vectorized string operations are provided as methods (e.g. str.endswith) and infix operators (e.g. +, *, %) Parameters objarray of str or unicode-like itemsizeint, optional itemsize is the number of characters per scalar in the resulting array. If itemsize is None, and obj is an object array or a Python list, the itemsize will be automatically determined. If itemsize is provided and obj is of type str or unicode, then the obj string will be chunked into itemsize pieces. unicodebool, optional When true, the resulting chararray can contain Unicode characters, when false only 8-bit characters. If unicode is None and obj is one of the following: a chararray, an ndarray of type str or unicode a Python str or unicode object, then the unicode setting of the output array will be automatically determined. order{‘C’, ‘F’}, optional Specify the order of the array. If order is ‘C’ (default), then the array will be in C-contiguous order (last-index varies the fastest). If order is ‘F’, then the returned array will be in Fortran-contiguous order (first-index varies the fastest).
numpy.reference.generated.numpy.char.asarray
numpy.char.capitalize char.capitalize(a)[source] Return a copy of a with only the first character of each element capitalized. Calls str.capitalize element-wise. For 8-bit strings, this method is locale-dependent. Parameters aarray_like of str or unicode Input array of strings to capitalize. Returns outndarray Output array of str or unicode, depending on input types See also str.capitalize Examples >>> c = np.array(['a1b2','1b2a','b2a1','2a1b'],'S4'); c array(['a1b2', '1b2a', 'b2a1', '2a1b'], dtype='|S4') >>> np.char.capitalize(c) array(['A1b2', '1b2a', 'B2a1', '2a1b'], dtype='|S4')
numpy.reference.generated.numpy.char.capitalize
numpy.char.center char.center(a, width, fillchar=' ')[source] Return a copy of a with its elements centered in a string of length width. Calls str.center element-wise. Parameters aarray_like of str or unicode widthint The length of the resulting strings fillcharstr or unicode, optional The padding character to use (default is space). Returns outndarray Output array of str or unicode, depending on input types See also str.center
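A short sketch of the centering behavior (example strings are arbitrary): each element is placed in a field of the requested width, padded on both sides with fillchar, just like str.center.

```python
import numpy as np

titles = np.array(['one', 'seven'])

# Each element centered in a field of width 9, padded with '-'
print(np.char.center(titles, 9, fillchar='-').tolist())
```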
numpy.reference.generated.numpy.char.center
numpy.char.chararray.argsort method char.chararray.argsort(axis=- 1, kind=None, order=None)[source] Returns the indices that would sort this array. Refer to numpy.argsort for full documentation. See also numpy.argsort equivalent function
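A minimal sketch (example strings are arbitrary): for string arrays, argsort returns the indices that order the elements lexicographically, so indexing with the result yields a sorted array.

```python
import numpy as np

c = np.char.array(['banana', 'apple', 'cherry'])

# Indices that would sort the array lexicographically
order = c.argsort()
print(order)
print(c[order].tolist())
```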
numpy.reference.generated.numpy.char.chararray.argsort
numpy.char.chararray.astype method char.chararray.astype(dtype, order='K', casting='unsafe', subok=True, copy=True) Copy of the array, cast to a specified type. Parameters dtypestr or dtype Typecode or data-type to which the array is cast. order{‘C’, ‘F’, ‘A’, ‘K’}, optional Controls the memory layout order of the result. ‘C’ means C order, ‘F’ means Fortran order, ‘A’ means ‘F’ order if all the arrays are Fortran contiguous, ‘C’ order otherwise, and ‘K’ means as close to the order the array elements appear in memory as possible. Default is ‘K’. casting{‘no’, ‘equiv’, ‘safe’, ‘same_kind’, ‘unsafe’}, optional Controls what kind of data casting may occur. Defaults to ‘unsafe’ for backwards compatibility. ‘no’ means the data types should not be cast at all. ‘equiv’ means only byte-order changes are allowed. ‘safe’ means only casts which can preserve values are allowed. ‘same_kind’ means only safe casts or casts within a kind, like float64 to float32, are allowed. ‘unsafe’ means any data conversions may be done. subokbool, optional If True, then sub-classes will be passed-through (default), otherwise the returned array will be forced to be a base-class array. copybool, optional By default, astype always returns a newly allocated array. If this is set to false, and the dtype, order, and subok requirements are satisfied, the input array is returned instead of a copy. Returns arr_tndarray Unless copy is False and the other conditions for returning the input array are satisfied (see description for copy input parameter), arr_t is a new array of the same shape as the input array, with dtype, order given by dtype, order. Raises ComplexWarning When casting from complex to float or int. To avoid this, one should use a.real.astype(t). Notes Changed in version 1.17.0: Casting between a simple data type and a structured one is possible only for “unsafe” casting. Casting to multiple fields is allowed, but casting from multiple fields is not. 
Changed in version 1.9.0: Casting from numeric to string types in ‘safe’ casting mode requires that the string dtype length is long enough to store the max integer/float value converted. Examples >>> x = np.array([1, 2, 2.5]) >>> x array([1. , 2. , 2.5]) >>> x.astype(int) array([1, 2, 2])
numpy.reference.generated.numpy.char.chararray.astype
numpy.char.chararray.base attribute char.chararray.base Base object if memory is from some other object. Examples The base of an array that owns its memory is None: >>> x = np.array([1,2,3,4]) >>> x.base is None True Slicing creates a view, whose memory is shared with x: >>> y = x[2:] >>> y.base is x True
numpy.reference.generated.numpy.char.chararray.base
numpy.char.chararray.copy method char.chararray.copy(order='C') Return a copy of the array. Parameters order{‘C’, ‘F’, ‘A’, ‘K’}, optional Controls the memory layout of the copy. ‘C’ means C-order, ‘F’ means F-order, ‘A’ means ‘F’ if a is Fortran contiguous, ‘C’ otherwise. ‘K’ means match the layout of a as closely as possible. (Note that this function and numpy.copy are very similar but have different default values for their order= arguments, and this function always passes sub-classes through.) See also numpy.copy Similar function with different default behavior numpy.copyto Notes This function is the preferred method for creating an array copy. The function numpy.copy is similar, but it defaults to using order ‘K’, and will not pass sub-classes through by default. Examples >>> x = np.array([[1,2,3],[4,5,6]], order='F') >>> y = x.copy() >>> x.fill(0) >>> x array([[0, 0, 0], [0, 0, 0]]) >>> y array([[1, 2, 3], [4, 5, 6]]) >>> y.flags['C_CONTIGUOUS'] True
numpy.reference.generated.numpy.char.chararray.copy
numpy.char.chararray.count method char.chararray.count(sub, start=0, end=None)[source] Returns an array with the number of non-overlapping occurrences of substring sub in the range [start, end]. See also char.count
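A minimal sketch (the array contents below are invented for illustration):

```python
import numpy as np

# Hypothetical data: count non-overlapping occurrences per element
a = np.char.array(['hello', 'world'])
counts = a.count('l')   # 'hello' has two 'l', 'world' has one
```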
numpy.reference.generated.numpy.char.chararray.count
numpy.char.chararray.ctypes attribute char.chararray.ctypes An object to simplify the interaction of the array with the ctypes module. This attribute creates an object that makes it easier to use arrays when calling shared libraries with the ctypes module. The returned object has, among others, data, shape, and strides attributes (see Notes below) which themselves return ctypes objects that can be used as arguments to a shared library. Parameters None Returns cPython object Possessing attributes data, shape, strides, etc. See also numpy.ctypeslib Notes Below are the public attributes of this object which were documented in “Guide to NumPy” (we have omitted undocumented public attributes, as well as documented private attributes): _ctypes.data A pointer to the memory area of the array as a Python integer. This memory area may contain data that is not aligned, or not in correct byte-order. The memory area may not even be writeable. The array flags and data-type of this array should be respected when passing this attribute to arbitrary C-code to avoid trouble that can include Python crashing. User Beware! The value of this attribute is exactly the same as self.__array_interface__['data'][0]. Note that unlike data_as, a reference will not be kept to the array: code like ctypes.c_void_p((a + b).ctypes.data) will result in a pointer to a deallocated array, and should be spelt (a + b).ctypes.data_as(ctypes.c_void_p) _ctypes.shape (c_intp*self.ndim): A ctypes array of length self.ndim where the basetype is the C-integer corresponding to dtype('p') on this platform (see c_intp). This base-type could be ctypes.c_int, ctypes.c_long, or ctypes.c_longlong depending on the platform. The ctypes array contains the shape of the underlying array. _ctypes.strides (c_intp*self.ndim): A ctypes array of length self.ndim where the basetype is the same as for the shape attribute. This ctypes array contains the strides information from the underlying array. 
This strides information is important for showing how many bytes must be jumped to get to the next element in the array. _ctypes.data_as(obj)[source] Return the data pointer cast to a particular c-types object. For example, calling self._as_parameter_ is equivalent to self.data_as(ctypes.c_void_p). Perhaps you want to use the data as a pointer to a ctypes array of floating-point data: self.data_as(ctypes.POINTER(ctypes.c_double)). The returned pointer will keep a reference to the array. _ctypes.shape_as(obj)[source] Return the shape tuple as an array of some other c-types type. For example: self.shape_as(ctypes.c_short). _ctypes.strides_as(obj)[source] Return the strides tuple as an array of some other c-types type. For example: self.strides_as(ctypes.c_longlong). If the ctypes module is not available, then the ctypes attribute of array objects still returns something useful, but ctypes objects are not returned and errors may be raised instead. In particular, the object will still have the _as_parameter_ attribute which will return an integer equal to the data attribute. Examples >>> import ctypes >>> x = np.array([[0, 1], [2, 3]], dtype=np.int32) >>> x array([[0, 1], [2, 3]], dtype=int32) >>> x.ctypes.data 31962608 # may vary >>> x.ctypes.data_as(ctypes.POINTER(ctypes.c_uint32)) <__main__.LP_c_uint object at 0x7ff2fc1fc200> # may vary >>> x.ctypes.data_as(ctypes.POINTER(ctypes.c_uint32)).contents c_uint(0) >>> x.ctypes.data_as(ctypes.POINTER(ctypes.c_uint64)).contents c_ulong(4294967296) >>> x.ctypes.shape <numpy.core._internal.c_long_Array_2 object at 0x7ff2fc1fce60> # may vary >>> x.ctypes.strides <numpy.core._internal.c_long_Array_2 object at 0x7ff2fc1ff320> # may vary
numpy.reference.generated.numpy.char.chararray.ctypes
numpy.char.chararray.data attribute char.chararray.data Python buffer object pointing to the start of the array’s data.
numpy.reference.generated.numpy.char.chararray.data
numpy.char.chararray.decode method char.chararray.decode(encoding=None, errors=None)[source] Calls str.decode element-wise. See also char.decode
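As a quick sketch (bytes contents invented), each bytes element is decoded to a str:

```python
import numpy as np

# Hypothetical bytes chararray, decoded element-wise
b = np.char.array([b'spam', b'eggs'])
s = b.decode('utf-8')
```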
numpy.reference.generated.numpy.char.chararray.decode
numpy.char.chararray.dtype attribute char.chararray.dtype Data-type of the array’s elements. Parameters None Returns dnumpy dtype object See also numpy.dtype Examples >>> x = np.array([[0, 1], [2, 3]], dtype=np.int32) >>> x.dtype dtype('int32') >>> type(x.dtype) <class 'numpy.dtype'>
numpy.reference.generated.numpy.char.chararray.dtype
numpy.char.chararray.dump method char.chararray.dump(file) Dump a pickle of the array to the specified file. The array can be read back with pickle.load or numpy.load. Parameters filestr or Path A string naming the dump file. Changed in version 1.17.0: pathlib.Path objects are now accepted.
numpy.reference.generated.numpy.char.chararray.dump
numpy.char.chararray.dumps method char.chararray.dumps() Returns the pickle of the array as a string. pickle.loads will convert the string back to an array. Parameters None
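A minimal round-trip sketch (note that on Python 3 the returned “string” is a bytes object):

```python
import pickle
import numpy as np

a = np.array([1, 2, 3])
buf = a.dumps()               # pickled representation of the array
restored = pickle.loads(buf)  # reconstruct the original array
```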
numpy.reference.generated.numpy.char.chararray.dumps
numpy.char.chararray.encode method char.chararray.encode(encoding=None, errors=None)[source] Calls str.encode element-wise. See also char.encode
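For illustration (contents invented), each str element is encoded to bytes:

```python
import numpy as np

s = np.char.array(['spam', 'eggs'])
b = s.encode('utf-8')   # element-wise str.encode
```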
numpy.reference.generated.numpy.char.chararray.encode
numpy.char.chararray.endswith method char.chararray.endswith(suffix, start=0, end=None)[source] Returns a boolean array which is True where the string element in self ends with suffix, otherwise False. See also char.endswith
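A short sketch with invented data:

```python
import numpy as np

a = np.char.array(['hello', 'world'])
mask = a.endswith('o')   # 'hello' ends with 'o', 'world' does not
```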
numpy.reference.generated.numpy.char.chararray.endswith
numpy.char.chararray.expandtabs method char.chararray.expandtabs(tabsize=8)[source] Return a copy of each string element where all tab characters are replaced by one or more spaces. See also char.expandtabs
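For example (invented data), with a tab stop every 4 columns a tab after one character expands to three spaces:

```python
import numpy as np

t = np.char.array(['a\tb'])   # one tab character
expanded = t.expandtabs(4)    # tab stops at columns 4, 8, ...
```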
numpy.reference.generated.numpy.char.chararray.expandtabs
numpy.char.chararray.fill method char.chararray.fill(value) Fill the array with a scalar value. Parameters valuescalar All elements of a will be assigned this value. Examples >>> a = np.array([1, 2]) >>> a.fill(0) >>> a array([0, 0]) >>> a = np.empty(2) >>> a.fill(1) >>> a array([1., 1.])
numpy.reference.generated.numpy.char.chararray.fill
numpy.char.chararray.find method char.chararray.find(sub, start=0, end=None)[source] For each element, return the lowest index in the string where substring sub is found. See also char.find
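A minimal sketch (invented data); like str.find, absent substrings yield -1:

```python
import numpy as np

a = np.char.array(['hello', 'world'])
pos = a.find('o')       # lowest index of 'o' in each element
missing = a.find('z')   # -1 where the substring is absent
```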
numpy.reference.generated.numpy.char.chararray.find
numpy.char.chararray.flags attribute char.chararray.flags Information about the memory layout of the array. Notes The flags object can be accessed dictionary-like (as in a.flags['WRITEABLE']), or by using lowercased attribute names (as in a.flags.writeable). Short flag names are only supported in dictionary access. Only the WRITEBACKIFCOPY, UPDATEIFCOPY, WRITEABLE, and ALIGNED flags can be changed by the user, via direct assignment to the attribute or dictionary entry, or by calling ndarray.setflags. The array flags cannot be set arbitrarily: UPDATEIFCOPY can only be set False. WRITEBACKIFCOPY can only be set False. ALIGNED can only be set True if the data is truly aligned. WRITEABLE can only be set True if the array owns its own memory or the ultimate owner of the memory exposes a writeable buffer interface or is a string. Arrays can be both C-style and Fortran-style contiguous simultaneously. This is clear for 1-dimensional arrays, but can also be true for higher dimensional arrays. Even for contiguous arrays a stride for a given dimension arr.strides[dim] may be arbitrary if arr.shape[dim] == 1 or the array has no elements. It does not generally hold that self.strides[-1] == self.itemsize for C-style contiguous arrays or self.strides[0] == self.itemsize for Fortran-style contiguous arrays is true. Attributes C_CONTIGUOUS (C) The data is in a single, C-style contiguous segment. F_CONTIGUOUS (F) The data is in a single, Fortran-style contiguous segment. OWNDATA (O) The array owns the memory it uses or borrows it from another object. WRITEABLE (W) The data area can be written to. Setting this to False locks the data, making it read-only. A view (slice, etc.) inherits WRITEABLE from its base array at creation time, but a view of a writeable array may be subsequently locked while the base array remains writeable. (The opposite is not true, in that a view of a locked array may not be made writeable. 
However, currently, locking a base object does not lock any views that already reference it, so under that circumstance it is possible to alter the contents of a locked array via a previously created writeable view onto it.) Attempting to change a non-writeable array raises a RuntimeError exception. ALIGNED (A) The data and all elements are aligned appropriately for the hardware. WRITEBACKIFCOPY (X) This array is a copy of some other array. The C-API function PyArray_ResolveWritebackIfCopy must be called before deallocating this array, so that the base array is updated with the contents of this array. UPDATEIFCOPY (U) (Deprecated, use WRITEBACKIFCOPY) This array is a copy of some other array. When this array is deallocated, the base array will be updated with the contents of this array. FNC F_CONTIGUOUS and not C_CONTIGUOUS. FORC F_CONTIGUOUS or C_CONTIGUOUS (one-segment test). BEHAVED (B) ALIGNED and WRITEABLE. CARRAY (CA) BEHAVED and C_CONTIGUOUS. FARRAY (FA) BEHAVED and F_CONTIGUOUS and not C_CONTIGUOUS.
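A short sketch of flag access (a plain ndarray is shown; a chararray exposes the same flags object):

```python
import numpy as np

a = np.zeros((2, 3))
c_contig = a.flags['C_CONTIGUOUS']       # freshly allocated: True
strided = a[:, ::2]                      # strided view
view_contig = strided.flags['C_CONTIGUOUS']  # False: gaps between columns
a.flags.writeable = False                # lock the data, making it read-only
```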
numpy.reference.generated.numpy.char.chararray.flags
numpy.char.chararray.flat attribute char.chararray.flat A 1-D iterator over the array. This is a numpy.flatiter instance, which acts similarly to, but is not a subclass of, Python’s built-in iterator object. See also flatten Return a copy of the array collapsed into one dimension. flatiter Examples >>> x = np.arange(1, 7).reshape(2, 3) >>> x array([[1, 2, 3], [4, 5, 6]]) >>> x.flat[3] 4 >>> x.T array([[1, 4], [2, 5], [3, 6]]) >>> x.T.flat[3] 5 >>> type(x.flat) <class 'numpy.flatiter'> An assignment example: >>> x.flat = 3; x array([[3, 3, 3], [3, 3, 3]]) >>> x.flat[[1,4]] = 1; x array([[3, 1, 3], [3, 1, 3]])
numpy.reference.generated.numpy.char.chararray.flat
numpy.char.chararray.flatten method char.chararray.flatten(order='C') Return a copy of the array collapsed into one dimension. Parameters order{‘C’, ‘F’, ‘A’, ‘K’}, optional ‘C’ means to flatten in row-major (C-style) order. ‘F’ means to flatten in column-major (Fortran- style) order. ‘A’ means to flatten in column-major order if a is Fortran contiguous in memory, row-major order otherwise. ‘K’ means to flatten a in the order the elements occur in memory. The default is ‘C’. Returns yndarray A copy of the input array, flattened to one dimension. See also ravel Return a flattened array. flat A 1-D flat iterator over the array. Examples >>> a = np.array([[1,2], [3,4]]) >>> a.flatten() array([1, 2, 3, 4]) >>> a.flatten('F') array([1, 3, 2, 4])
numpy.reference.generated.numpy.char.chararray.flatten
numpy.char.chararray.getfield method char.chararray.getfield(dtype, offset=0) Returns a field of the given array as a certain type. A field is a view of the array data with a given data-type. The values in the view are determined by the given type and the offset into the current array in bytes. The offset needs to be such that the view dtype fits in the array dtype; for example an array of dtype complex128 has 16-byte elements. If taking a view with a 32-bit integer (4 bytes), the offset needs to be between 0 and 12 bytes. Parameters dtypestr or dtype The data type of the view. The dtype size of the view can not be larger than that of the array itself. offsetint Number of bytes to skip before beginning the element view. Examples >>> x = np.diag([1.+1.j]*2) >>> x[1, 1] = 2 + 4.j >>> x array([[1.+1.j, 0.+0.j], [0.+0.j, 2.+4.j]]) >>> x.getfield(np.float64) array([[1., 0.], [0., 2.]]) By choosing an offset of 8 bytes we can select the complex part of the array for our view: >>> x.getfield(np.float64, offset=8) array([[1., 0.], [0., 4.]])
numpy.reference.generated.numpy.char.chararray.getfield
numpy.char.chararray.imag attribute char.chararray.imag The imaginary part of the array. Examples >>> x = np.sqrt([1+0j, 0+1j]) >>> x.imag array([ 0. , 0.70710678]) >>> x.imag.dtype dtype('float64')
numpy.reference.generated.numpy.char.chararray.imag
numpy.char.chararray.index method char.chararray.index(sub, start=0, end=None)[source] Like find, but raises ValueError when the substring is not found. See also char.index
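A sketch of the difference from find (invented data):

```python
import numpy as np

a = np.char.array(['hello', 'world'])
idx = a.index('o')        # same result as find when the substring exists
try:
    a.index('z')          # unlike find, a missing substring raises
    raised = False
except ValueError:
    raised = True
```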
numpy.reference.generated.numpy.char.chararray.index
numpy.char.chararray.isalnum method char.chararray.isalnum()[source] Returns true for each element if all characters in the string are alphanumeric and there is at least one character, false otherwise. See also char.isalnum
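For example (invented data; note the space and the empty string fail the test):

```python
import numpy as np

a = np.char.array(['abc123', 'ab c', ''])
mask = a.isalnum()
```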
numpy.reference.generated.numpy.char.chararray.isalnum
numpy.char.chararray.isalpha method char.chararray.isalpha()[source] Returns true for each element if all characters in the string are alphabetic and there is at least one character, false otherwise. See also char.isalpha
numpy.reference.generated.numpy.char.chararray.isalpha
numpy.char.chararray.isdecimal method char.chararray.isdecimal()[source] For each element in self, return True if there are only decimal characters in the element. See also char.isdecimal
numpy.reference.generated.numpy.char.chararray.isdecimal
numpy.char.chararray.isdigit method char.chararray.isdigit()[source] Returns true for each element if all characters in the string are digits and there is at least one character, false otherwise. See also char.isdigit
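A minimal sketch (invented data):

```python
import numpy as np

a = np.char.array(['123', '12a', ''])
mask = a.isdigit()   # True only if non-empty and all digits
```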
numpy.reference.generated.numpy.char.chararray.isdigit
numpy.char.chararray.islower method char.chararray.islower()[source] Returns true for each element if all cased characters in the string are lowercase and there is at least one cased character, false otherwise. See also char.islower
numpy.reference.generated.numpy.char.chararray.islower
numpy.char.chararray.isnumeric method char.chararray.isnumeric()[source] For each element in self, return True if there are only numeric characters in the element. See also char.isnumeric
numpy.reference.generated.numpy.char.chararray.isnumeric
numpy.char.chararray.isspace method char.chararray.isspace()[source] Returns true for each element if there are only whitespace characters in the string and there is at least one character, false otherwise. See also char.isspace
numpy.reference.generated.numpy.char.chararray.isspace
numpy.char.chararray.istitle method char.chararray.istitle()[source] Returns true for each element if the element is a titlecased string and there is at least one character, false otherwise. See also char.istitle
numpy.reference.generated.numpy.char.chararray.istitle
numpy.char.chararray.isupper method char.chararray.isupper()[source] Returns true for each element if all cased characters in the string are uppercase and there is at least one character, false otherwise. See also char.isupper
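For example (invented data; '123' has no cased characters, so it is False):

```python
import numpy as np

a = np.char.array(['HELLO', 'Hello', '123'])
mask = a.isupper()
```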
numpy.reference.generated.numpy.char.chararray.isupper
numpy.char.chararray.item method char.chararray.item(*args) Copy an element of an array to a standard Python scalar and return it. Parameters *argsArguments (variable number and type) none: in this case, the method only works for arrays with one element (a.size == 1), which element is copied into a standard Python scalar object and returned. int_type: this argument is interpreted as a flat index into the array, specifying which element to copy and return. tuple of int_types: functions as does a single int_type argument, except that the argument is interpreted as an nd-index into the array. Returns zStandard Python scalar object A copy of the specified element of the array as a suitable Python scalar Notes When the data type of a is longdouble or clongdouble, item() returns a scalar array object because there is no available Python scalar that would not lose information. Void arrays return a buffer object for item(), unless fields are defined, in which case a tuple is returned. item is very similar to a[args], except, instead of an array scalar, a standard Python scalar is returned. This can be useful for speeding up access to elements of the array and doing arithmetic on elements of the array using Python’s optimized math. Examples >>> np.random.seed(123) >>> x = np.random.randint(9, size=(3, 3)) >>> x array([[2, 2, 6], [1, 3, 6], [1, 0, 1]]) >>> x.item(3) 1 >>> x.item(7) 0 >>> x.item((0, 1)) 2 >>> x.item((2, 2)) 1
numpy.reference.generated.numpy.char.chararray.item
numpy.char.chararray.itemsize attribute char.chararray.itemsize Length of one array element in bytes. Examples >>> x = np.array([1,2,3], dtype=np.float64) >>> x.itemsize 8 >>> x = np.array([1,2,3], dtype=np.complex128) >>> x.itemsize 16
numpy.reference.generated.numpy.char.chararray.itemsize
numpy.char.chararray.join method char.chararray.join(seq)[source] Return a string which is the concatenation of the strings in the sequence seq. See also char.join
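Note that self supplies the separator(s); a sketch with invented data:

```python
import numpy as np

sep = np.char.array(['-', '.'])
joined = sep.join('ab')   # each separator joins the characters of 'ab'
```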
numpy.reference.generated.numpy.char.chararray.join
numpy.char.chararray.ljust method char.chararray.ljust(width, fillchar=' ')[source] Return an array with the elements of self left-justified in a string of length width. See also char.ljust
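A sketch with invented data. A non-space fillchar is used here because chararray strips trailing whitespace when elements are retrieved:

```python
import numpy as np

a = np.char.array(['hi', 'there'])
padded = a.ljust(6, '.')   # pad each element to width 6
```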
numpy.reference.generated.numpy.char.chararray.ljust
numpy.char.chararray.lower method char.chararray.lower()[source] Return an array with the elements of self converted to lowercase. See also char.lower
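A minimal sketch (invented data):

```python
import numpy as np

a = np.char.array(['HELLO', 'World'])
lowered = a.lower()
```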
numpy.reference.generated.numpy.char.chararray.lower
numpy.char.chararray.lstrip method char.chararray.lstrip(chars=None)[source] For each element in self, return a copy with the leading characters removed. See also char.lstrip
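For example (invented data), stripping a specific character versus the default leading whitespace:

```python
import numpy as np

a = np.char.array(['xxhello', '  world'])
stripped_x = a.lstrip('x')   # remove leading 'x' characters only
stripped_ws = a.lstrip()     # default: remove leading whitespace
```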
numpy.reference.generated.numpy.char.chararray.lstrip
numpy.char.chararray.nbytes attribute char.chararray.nbytes Total bytes consumed by the elements of the array. Notes Does not include memory consumed by non-element attributes of the array object. Examples >>> x = np.zeros((3,5,2), dtype=np.complex128) >>> x.nbytes 480 >>> np.prod(x.shape) * x.itemsize 480
numpy.reference.generated.numpy.char.chararray.nbytes
numpy.char.chararray.ndim attribute char.chararray.ndim Number of array dimensions. Examples >>> x = np.array([1, 2, 3]) >>> x.ndim 1 >>> y = np.zeros((2, 3, 4)) >>> y.ndim 3
numpy.reference.generated.numpy.char.chararray.ndim
numpy.char.chararray.nonzero method char.chararray.nonzero() Return the indices of the elements that are non-zero. Refer to numpy.nonzero for full documentation. See also numpy.nonzero equivalent function
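A minimal sketch (invented data); the result is a tuple with one index array per dimension:

```python
import numpy as np

a = np.array([0, 3, 0, 5])
idx = a.nonzero()   # indices of the non-zero elements
```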
numpy.reference.generated.numpy.char.chararray.nonzero
numpy.char.chararray.put method char.chararray.put(indices, values, mode='raise') Set a.flat[n] = values[n] for all n in indices. Refer to numpy.put for full documentation. See also numpy.put equivalent function
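A minimal sketch (invented data):

```python
import numpy as np

a = np.zeros(4, dtype=int)
a.put([0, 2], [9, 7])   # a.flat[0] = 9, a.flat[2] = 7, in place
```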
numpy.reference.generated.numpy.char.chararray.put
numpy.char.chararray.ravel method char.chararray.ravel([order]) Return a flattened array. Refer to numpy.ravel for full documentation. See also numpy.ravel equivalent function ndarray.flat a flat iterator on the array.
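For example (invented data):

```python
import numpy as np

a = np.array([[1, 2], [3, 4]])
flat_c = a.ravel()      # row-major (C) order, the default
flat_f = a.ravel('F')   # column-major (Fortran) order
```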
numpy.reference.generated.numpy.char.chararray.ravel
numpy.char.chararray.real attribute char.chararray.real The real part of the array. See also numpy.real equivalent function Examples >>> x = np.sqrt([1+0j, 0+1j]) >>> x.real array([ 1. , 0.70710678]) >>> x.real.dtype dtype('float64')
numpy.reference.generated.numpy.char.chararray.real
numpy.char.chararray.repeat method char.chararray.repeat(repeats, axis=None) Repeat elements of an array. Refer to numpy.repeat for full documentation. See also numpy.repeat equivalent function
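A minimal sketch (invented data); without an axis the result is flattened:

```python
import numpy as np

a = np.array([[1, 2], [3, 4]])
r_flat = a.repeat(2)           # each element twice, flattened
r_axis = a.repeat(2, axis=1)   # each element twice along the columns
```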
numpy.reference.generated.numpy.char.chararray.repeat
numpy.char.chararray.replace method char.chararray.replace(old, new, count=None)[source] For each element in self, return a copy of the string with all occurrences of substring old replaced by new. See also char.replace
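A sketch with invented data, with and without a replacement limit:

```python
import numpy as np

a = np.char.array(['hello', 'world'])
swapped = a.replace('l', 'L')            # all occurrences
limited = a.replace('l', 'L', count=1)   # at most one per element
```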
numpy.reference.generated.numpy.char.chararray.replace