numpy.chararray.isalnum method chararray.isalnum()[source] Returns true for each element if all characters in the string are alphanumeric and there is at least one character, false otherwise. See also char.isalnum
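A quick sketch with made-up data (np.char.array builds a chararray); note that an empty string and a string containing a space both yield False:

```python
import numpy as np

a = np.char.array(['abc', 'a1', '', 'a b'])
print(a.isalnum())   # [ True  True False False]
```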
numpy.chararray.isalpha method chararray.isalpha()[source] Returns true for each element if all characters in the string are alphabetic and there is at least one character, false otherwise. See also char.isalpha
numpy.chararray.isdecimal method chararray.isdecimal()[source] For each element in self, return True if there are only decimal characters in the element. See also char.isdecimal
numpy.chararray.isdigit method chararray.isdigit()[source] Returns true for each element if all characters in the string are digits and there is at least one character, false otherwise. See also char.isdigit
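A minimal sketch with invented sample strings:

```python
import numpy as np

a = np.char.array(['123', '12a', ''])
print(a.isdigit())   # [ True False False]
```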
numpy.chararray.islower method chararray.islower()[source] Returns true for each element if all cased characters in the string are lowercase and there is at least one cased character, false otherwise. See also char.islower
numpy.chararray.isnumeric method chararray.isnumeric()[source] For each element in self, return True if there are only numeric characters in the element. See also char.isnumeric
numpy.chararray.isspace method chararray.isspace()[source] Returns true for each element if there are only whitespace characters in the string and there is at least one character, false otherwise. See also char.isspace
numpy.chararray.istitle method chararray.istitle()[source] Returns true for each element if the element is a titlecased string and there is at least one character, false otherwise. See also char.istitle
numpy.chararray.isupper method chararray.isupper()[source] Returns true for each element if all cased characters in the string are uppercase and there is at least one character, false otherwise. See also char.isupper
numpy.chararray.item method chararray.item(*args) Copy an element of an array to a standard Python scalar and return it. Parameters *argsArguments (variable number and type) none: in this case, the method only works for arrays with one element (a.size == 1), which element is copied into a standard Python scalar object and returned. int_type: this argument is interpreted as a flat index into the array, specifying which element to copy and return. tuple of int_types: functions as does a single int_type argument, except that the argument is interpreted as an nd-index into the array. Returns zStandard Python scalar object A copy of the specified element of the array as a suitable Python scalar Notes When the data type of a is longdouble or clongdouble, item() returns a scalar array object because there is no available Python scalar that would not lose information. Void arrays return a buffer object for item(), unless fields are defined, in which case a tuple is returned. item is very similar to a[args], except, instead of an array scalar, a standard Python scalar is returned. This can be useful for speeding up access to elements of the array and doing arithmetic on elements of the array using Python’s optimized math. Examples >>> np.random.seed(123) >>> x = np.random.randint(9, size=(3, 3)) >>> x array([[2, 2, 6], [1, 3, 6], [1, 0, 1]]) >>> x.item(3) 1 >>> x.item(7) 0 >>> x.item((0, 1)) 2 >>> x.item((2, 2)) 1
numpy.chararray.itemsize attribute chararray.itemsize Length of one array element in bytes. Examples >>> x = np.array([1,2,3], dtype=np.float64) >>> x.itemsize 8 >>> x = np.array([1,2,3], dtype=np.complex128) >>> x.itemsize 16
numpy.chararray.join method chararray.join(seq)[source] Return a string which is the concatenation of the strings in the sequence seq. See also char.join
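Here the elements of self serve as the separators. A small sketch with invented data (broadcasting pairs each separator with the corresponding string):

```python
import numpy as np

sep = np.char.array(['-', '.'])
print(sep.join(['ab', 'cde']))   # ['a-b' 'c.d.e']
```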
numpy.chararray.ljust method chararray.ljust(width, fillchar=' ')[source] Return an array with the elements of self left-justified in a string of length width. See also char.ljust
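A short sketch with made-up data; the fill character here is '*' so the padding is visible:

```python
import numpy as np

a = np.char.array(['a', 'bb'])
print(a.ljust(4, '*'))   # ['a***' 'bb**']
```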
numpy.chararray.lower method chararray.lower()[source] Return an array with the elements of self converted to lowercase. See also char.lower
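A minimal sketch with invented data:

```python
import numpy as np

a = np.char.array(['ABC', 'MixED'])
print(a.lower())   # ['abc' 'mixed']
```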
numpy.chararray.lstrip method chararray.lstrip(chars=None)[source] For each element in self, return a copy with the leading characters removed. See also char.lstrip
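A quick sketch with made-up data; only leading occurrences of the given characters are removed:

```python
import numpy as np

a = np.char.array(['xxhello', 'hixx'])
print(a.lstrip('x'))   # ['hello' 'hixx'] -- trailing 'x' kept
```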
numpy.chararray.nbytes attribute chararray.nbytes Total bytes consumed by the elements of the array. Notes Does not include memory consumed by non-element attributes of the array object. Examples >>> x = np.zeros((3,5,2), dtype=np.complex128) >>> x.nbytes 480 >>> np.prod(x.shape) * x.itemsize 480
numpy.chararray.ndim attribute chararray.ndim Number of array dimensions. Examples >>> x = np.array([1, 2, 3]) >>> x.ndim 1 >>> y = np.zeros((2, 3, 4)) >>> y.ndim 3
numpy.chararray.nonzero method chararray.nonzero() Return the indices of the elements that are non-zero. Refer to numpy.nonzero for full documentation. See also numpy.nonzero equivalent function
numpy.chararray.put method chararray.put(indices, values, mode='raise') Set a.flat[n] = values[n] for all n in indices. Refer to numpy.put for full documentation. See also numpy.put equivalent function
numpy.chararray.ravel method chararray.ravel([order]) Return a flattened array. Refer to numpy.ravel for full documentation. See also numpy.ravel equivalent function ndarray.flat a flat iterator on the array.
numpy.chararray.repeat method chararray.repeat(repeats, axis=None) Repeat elements of an array. Refer to numpy.repeat for full documentation. See also numpy.repeat equivalent function
numpy.chararray.replace method chararray.replace(old, new, count=None)[source] For each element in self, return a copy of the string with all occurrences of substring old replaced by new. See also char.replace
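A minimal sketch with invented data; elements without the substring pass through unchanged:

```python
import numpy as np

a = np.char.array(['aabbcc', 'xyz'])
print(a.replace('b', 'Z'))   # ['aaZZcc' 'xyz']
```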
numpy.chararray.reshape method chararray.reshape(shape, order='C') Returns an array containing the same data with a new shape. Refer to numpy.reshape for full documentation. See also numpy.reshape equivalent function Notes Unlike the free function numpy.reshape, this method on ndarray allows the elements of the shape parameter to be passed in as separate arguments. For example, a.reshape(10, 11) is equivalent to a.reshape((10, 11)).
numpy.chararray.resize method chararray.resize(new_shape, refcheck=True) Change shape and size of array in-place. Parameters new_shapetuple of ints, or n ints Shape of resized array. refcheckbool, optional If False, reference count will not be checked. Default is True. Returns None Raises ValueError If a does not own its own data or references or views to it exist, and the data memory must be changed. PyPy only: will always raise if the data memory must be changed, since there is no reliable way to determine if references or views to it exist. SystemError If the order keyword argument is specified. This behaviour is a bug in NumPy. See also resize Return a new array with the specified shape. Notes This reallocates space for the data area if necessary. Only contiguous arrays (data elements consecutive in memory) can be resized. The purpose of the reference count check is to make sure you do not use this array as a buffer for another Python object and then reallocate the memory. However, reference counts can increase in other ways so if you are sure that you have not shared the memory for this array with another Python object, then you may safely set refcheck to False. Examples Shrinking an array: array is flattened (in the order that the data are stored in memory), resized, and reshaped: >>> a = np.array([[0, 1], [2, 3]], order='C') >>> a.resize((2, 1)) >>> a array([[0], [1]]) >>> a = np.array([[0, 1], [2, 3]], order='F') >>> a.resize((2, 1)) >>> a array([[0], [2]]) Enlarging an array: as above, but missing entries are filled with zeros: >>> b = np.array([[0, 1], [2, 3]]) >>> b.resize(2, 3) # new_shape parameter doesn't have to be a tuple >>> b array([[0, 1, 2], [3, 0, 0]]) Referencing an array prevents resizing… >>> c = a >>> a.resize((1, 1)) Traceback (most recent call last): ... ValueError: cannot resize an array that references or is referenced ... Unless refcheck is False: >>> a.resize((1, 1), refcheck=False) >>> a array([[0]]) >>> c array([[0]])
numpy.chararray.rfind method chararray.rfind(sub, start=0, end=None)[source] For each element in self, return the highest index in the string where substring sub is found, such that sub is contained within [start, end]. See also char.rfind
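A quick sketch with made-up data; like str.rfind, elements where sub is absent give -1:

```python
import numpy as np

a = np.char.array(['abcba', 'xyz'])
print(a.rfind('b'))   # [ 3 -1]
```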
numpy.chararray.rindex method chararray.rindex(sub, start=0, end=None)[source] Like rfind, but raises ValueError when the substring sub is not found. See also char.rindex
numpy.chararray.rjust method chararray.rjust(width, fillchar=' ')[source] Return an array with the elements of self right-justified in a string of length width. See also char.rjust
numpy.chararray.rsplit method chararray.rsplit(sep=None, maxsplit=None)[source] For each element in self, return a list of the words in the string, using sep as the delimiter string. See also char.rsplit
numpy.chararray.rstrip method chararray.rstrip(chars=None)[source] For each element in self, return a copy with the trailing characters removed. See also char.rstrip
numpy.chararray.searchsorted method chararray.searchsorted(v, side='left', sorter=None) Find indices where elements of v should be inserted in a to maintain order. For full documentation, see numpy.searchsorted See also numpy.searchsorted equivalent function
numpy.chararray.setfield method chararray.setfield(val, dtype, offset=0) Put a value into a specified place in a field defined by a data-type. Place val into a’s field defined by dtype and beginning offset bytes into the field. Parameters valobject Value to be placed in field. dtypedtype object Data-type of the field in which to place val. offsetint, optional The number of bytes into the field at which to place val. Returns None See also getfield Examples >>> x = np.eye(3) >>> x.getfield(np.float64) array([[1., 0., 0.], [0., 1., 0.], [0., 0., 1.]]) >>> x.setfield(3, np.int32) >>> x.getfield(np.int32) array([[3, 3, 3], [3, 3, 3], [3, 3, 3]], dtype=int32) >>> x array([[1.0e+000, 1.5e-323, 1.5e-323], [1.5e-323, 1.0e+000, 1.5e-323], [1.5e-323, 1.5e-323, 1.0e+000]]) >>> x.setfield(np.eye(3), np.int32) >>> x array([[1., 0., 0.], [0., 1., 0.], [0., 0., 1.]])
numpy.chararray.setflags method chararray.setflags(write=None, align=None, uic=None) Set array flags WRITEABLE, ALIGNED, (WRITEBACKIFCOPY and UPDATEIFCOPY), respectively. These Boolean-valued flags affect how numpy interprets the memory area used by a (see Notes below). The ALIGNED flag can only be set to True if the data is actually aligned according to the type. The WRITEBACKIFCOPY and (deprecated) UPDATEIFCOPY flags can never be set to True. The flag WRITEABLE can only be set to True if the array owns its own memory, or the ultimate owner of the memory exposes a writeable buffer interface, or is a string. (The exception for string is made so that unpickling can be done without copying memory.) Parameters writebool, optional Describes whether or not a can be written to. alignbool, optional Describes whether or not a is aligned properly for its type. uicbool, optional Describes whether or not a is a copy of another “base” array. Notes Array flags provide information about how the memory area used for the array is to be interpreted. There are 7 Boolean flags in use, only four of which can be changed by the user: WRITEBACKIFCOPY, UPDATEIFCOPY, WRITEABLE, and ALIGNED. WRITEABLE (W) the data area can be written to; ALIGNED (A) the data and strides are aligned appropriately for the hardware (as determined by the compiler); UPDATEIFCOPY (U) (deprecated), replaced by WRITEBACKIFCOPY; WRITEBACKIFCOPY (X) this array is a copy of some other array (referenced by .base). When the C-API function PyArray_ResolveWritebackIfCopy is called, the base array will be updated with the contents of this array. All flags can be accessed using the single (upper case) letter as well as the full name. Examples >>> y = np.array([[3, 1, 7], ... [2, 0, 0], ... 
[8, 5, 9]]) >>> y array([[3, 1, 7], [2, 0, 0], [8, 5, 9]]) >>> y.flags C_CONTIGUOUS : True F_CONTIGUOUS : False OWNDATA : True WRITEABLE : True ALIGNED : True WRITEBACKIFCOPY : False UPDATEIFCOPY : False >>> y.setflags(write=0, align=0) >>> y.flags C_CONTIGUOUS : True F_CONTIGUOUS : False OWNDATA : True WRITEABLE : False ALIGNED : False WRITEBACKIFCOPY : False UPDATEIFCOPY : False >>> y.setflags(uic=1) Traceback (most recent call last): File "<stdin>", line 1, in <module> ValueError: cannot set WRITEBACKIFCOPY flag to True
numpy.chararray.size attribute chararray.size Number of elements in the array. Equal to np.prod(a.shape), i.e., the product of the array’s dimensions. Notes a.size returns a standard arbitrary precision Python integer. This may not be the case with other methods of obtaining the same value (like the suggested np.prod(a.shape), which returns an instance of np.int_), and may be relevant if the value is used further in calculations that may overflow a fixed size integer type. Examples >>> x = np.zeros((3, 5, 2), dtype=np.complex128) >>> x.size 30 >>> np.prod(x.shape) 30
numpy.chararray.sort method chararray.sort(axis=-1, kind=None, order=None) Sort an array in-place. Refer to numpy.sort for full documentation. Parameters axis : int, optional Axis along which to sort. Default is -1, which means sort along the last axis. kind : {‘quicksort’, ‘mergesort’, ‘heapsort’, ‘stable’}, optional Sorting algorithm. The default is ‘quicksort’. Note that both ‘stable’ and ‘mergesort’ use timsort under the covers and, in general, the actual implementation will vary with datatype. The ‘mergesort’ option is retained for backwards compatibility. Changed in version 1.15.0: The ‘stable’ option was added. order : str or list of str, optional When a is an array with fields defined, this argument specifies which fields to compare first, second, etc. A single field can be specified as a string, and not all fields need be specified, but unspecified fields will still be used, in the order in which they come up in the dtype, to break ties. See also numpy.sort Return a sorted copy of an array. numpy.argsort Indirect sort. numpy.lexsort Indirect stable sort on multiple keys. numpy.searchsorted Find elements in sorted array. numpy.partition Partial sort. Notes See numpy.sort for notes on the different sorting algorithms. Examples >>> a = np.array([[1,4], [3,1]]) >>> a.sort(axis=1) >>> a array([[1, 4], [1, 3]]) >>> a.sort(axis=0) >>> a array([[1, 3], [1, 4]]) Use the order keyword to specify a field to use when sorting a structured array: >>> a = np.array([('a', 2), ('c', 1)], dtype=[('x', 'S1'), ('y', int)]) >>> a.sort(order='y') >>> a array([(b'c', 1), (b'a', 2)], dtype=[('x', 'S1'), ('y', '<i8')])
numpy.chararray.split method chararray.split(sep=None, maxsplit=None)[source] For each element in self, return a list of the words in the string, using sep as the delimiter string. See also char.split
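A minimal sketch with invented data; the result is an object array whose elements are Python lists:

```python
import numpy as np

a = np.char.array(['a b', 'c d e'])
parts = a.split()
print(parts[0], parts[1])   # ['a', 'b'] ['c', 'd', 'e']
```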
numpy.chararray.splitlines method chararray.splitlines(keepends=None)[source] For each element in self, return a list of the lines in the element, breaking at line boundaries. See also char.splitlines
numpy.chararray.squeeze method chararray.squeeze(axis=None) Remove axes of length one from a. Refer to numpy.squeeze for full documentation. See also numpy.squeeze equivalent function
numpy.chararray.startswith method chararray.startswith(prefix, start=0, end=None)[source] Returns a boolean array which is True where the string element in self starts with prefix, otherwise False. See also char.startswith
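A quick sketch with made-up data:

```python
import numpy as np

a = np.char.array(['numpy', 'num', 'python'])
print(a.startswith('num'))   # [ True  True False]
```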
numpy.chararray.strides attribute chararray.strides Tuple of bytes to step in each dimension when traversing an array. The byte offset of element (i[0], i[1], ..., i[n]) in an array a is: offset = sum(np.array(i) * a.strides) A more detailed explanation of strides can be found in the “ndarray.rst” file in the NumPy reference guide. See also numpy.lib.stride_tricks.as_strided Notes Imagine an array of 32-bit integers (each 4 bytes): x = np.array([[0, 1, 2, 3, 4], [5, 6, 7, 8, 9]], dtype=np.int32) This array is stored in memory as 40 bytes, one after the other (known as a contiguous block of memory). The strides of an array tell us how many bytes we have to skip in memory to move to the next position along a certain axis. For example, we have to skip 4 bytes (1 value) to move to the next column, but 20 bytes (5 values) to get to the same position in the next row. As such, the strides for the array x will be (20, 4). Examples >>> y = np.reshape(np.arange(2*3*4), (2,3,4)) >>> y array([[[ 0, 1, 2, 3], [ 4, 5, 6, 7], [ 8, 9, 10, 11]], [[12, 13, 14, 15], [16, 17, 18, 19], [20, 21, 22, 23]]]) >>> y.strides (48, 16, 4) >>> y[1,1,1] 17 >>> offset=sum(y.strides * np.array((1,1,1))) >>> offset/y.itemsize 17 >>> x = np.reshape(np.arange(5*6*7*8), (5,6,7,8)).transpose(2,3,1,0) >>> x.strides (32, 4, 224, 1344) >>> i = np.array([3,5,2,2]) >>> offset = sum(i * x.strides) >>> x[3,5,2,2] 813 >>> offset / x.itemsize 813
numpy.chararray.strip method chararray.strip(chars=None)[source] For each element in self, return a copy with the leading and trailing characters removed. See also char.strip
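A minimal sketch with invented data; both leading and trailing occurrences of the given characters are removed:

```python
import numpy as np

a = np.char.array(['xxaxx', 'axb'])
print(a.strip('x'))   # ['a' 'axb']
```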
numpy.chararray.swapaxes method chararray.swapaxes(axis1, axis2) Return a view of the array with axis1 and axis2 interchanged. Refer to numpy.swapaxes for full documentation. See also numpy.swapaxes equivalent function
numpy.chararray.swapcase method chararray.swapcase()[source] For each element in self, return a copy of the string with uppercase characters converted to lowercase and vice versa. See also char.swapcase
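A quick sketch with made-up data:

```python
import numpy as np

a = np.char.array(['Hello', 'WORLD'])
print(a.swapcase())   # ['hELLO' 'world']
```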
numpy.chararray.T attribute chararray.T The transposed array. Same as self.transpose(). See also transpose Examples >>> x = np.array([[1.,2.],[3.,4.]]) >>> x array([[ 1., 2.], [ 3., 4.]]) >>> x.T array([[ 1., 3.], [ 2., 4.]]) >>> x = np.array([1.,2.,3.,4.]) >>> x array([ 1., 2., 3., 4.]) >>> x.T array([ 1., 2., 3., 4.])
numpy.chararray.take method chararray.take(indices, axis=None, out=None, mode='raise') Return an array formed from the elements of a at the given indices. Refer to numpy.take for full documentation. See also numpy.take equivalent function
numpy.chararray.title method chararray.title()[source] For each element in self, return a titlecased version of the string: words start with uppercase characters, all remaining cased characters are lowercase. See also char.title
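A minimal sketch with invented data:

```python
import numpy as np

a = np.char.array(['hello world', 'HELLO'])
print(a.title())   # ['Hello World' 'Hello']
```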
numpy.chararray.tobytes method chararray.tobytes(order='C') Construct Python bytes containing the raw data bytes in the array. Constructs Python bytes showing a copy of the raw contents of data memory. The bytes object is produced in C-order by default. This behavior is controlled by the order parameter. New in version 1.9.0. Parameters order{‘C’, ‘F’, ‘A’}, optional Controls the memory layout of the bytes object. ‘C’ means C-order, ‘F’ means F-order, ‘A’ (short for Any) means ‘F’ if a is Fortran contiguous, ‘C’ otherwise. Default is ‘C’. Returns sbytes Python bytes exhibiting a copy of a’s raw data. Examples >>> x = np.array([[0, 1], [2, 3]], dtype='<u2') >>> x.tobytes() b'\x00\x00\x01\x00\x02\x00\x03\x00' >>> x.tobytes('C') == x.tobytes() True >>> x.tobytes('F') b'\x00\x00\x02\x00\x01\x00\x03\x00'
numpy.chararray.tofile method chararray.tofile(fid, sep='', format='%s') Write array to a file as text or binary (default). Data is always written in ‘C’ order, independent of the order of a. The data produced by this method can be recovered using the function fromfile(). Parameters fidfile or str or Path An open file object, or a string containing a filename. Changed in version 1.17.0: pathlib.Path objects are now accepted. sepstr Separator between array items for text output. If “” (empty), a binary file is written, equivalent to file.write(a.tobytes()). formatstr Format string for text file output. Each entry in the array is formatted to text by first converting it to the closest Python type, and then using “format” % item. Notes This is a convenience function for quick storage of array data. Information on endianness and precision is lost, so this method is not a good choice for files intended to archive data or transport data between machines with different endianness. Some of these problems can be overcome by outputting the data as text files, at the expense of speed and file size. When fid is a file object, array contents are directly written to the file, bypassing the file object’s write method. As a result, tofile cannot be used with files objects supporting compression (e.g., GzipFile) or file-like objects that do not support fileno() (e.g., BytesIO).
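A round-trip sketch of the binary mode (the temporary file name is arbitrary); note that fromfile needs the dtype re-supplied, since tofile stores no metadata:

```python
import os
import tempfile

import numpy as np

a = np.arange(6, dtype=np.int32)
path = os.path.join(tempfile.mkdtemp(), 'data.bin')  # throwaway file
a.tofile(path)                          # sep='' -> raw binary
b = np.fromfile(path, dtype=np.int32)   # dtype is not stored in the file
print(b)   # [0 1 2 3 4 5]
```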
numpy.chararray.tolist method chararray.tolist() Return the array as an a.ndim-levels deep nested list of Python scalars. Return a copy of the array data as a (nested) Python list. Data items are converted to the nearest compatible builtin Python type, via the item function. If a.ndim is 0, then since the depth of the nested list is 0, it will not be a list at all, but a simple Python scalar. Parameters none Returns yobject, or list of object, or list of list of object, or … The possibly nested list of array elements. Notes The array may be recreated via a = np.array(a.tolist()), although this may sometimes lose precision. Examples For a 1D array, a.tolist() is almost the same as list(a), except that tolist changes numpy scalars to Python scalars: >>> a = np.uint32([1, 2]) >>> a_list = list(a) >>> a_list [1, 2] >>> type(a_list[0]) <class 'numpy.uint32'> >>> a_tolist = a.tolist() >>> a_tolist [1, 2] >>> type(a_tolist[0]) <class 'int'> Additionally, for a 2D array, tolist applies recursively: >>> a = np.array([[1, 2], [3, 4]]) >>> list(a) [array([1, 2]), array([3, 4])] >>> a.tolist() [[1, 2], [3, 4]] The base case for this recursion is a 0D array: >>> a = np.array(1) >>> list(a) Traceback (most recent call last): ... TypeError: iteration over a 0-d array >>> a.tolist() 1
numpy.chararray.tostring method chararray.tostring(order='C') A compatibility alias for tobytes, with exactly the same behavior. Despite its name, it returns bytes not strs. Deprecated since version 1.19.0.
numpy.chararray.translate method chararray.translate(table, deletechars=None)[source] For each element in self, return a copy of the string where all characters occurring in the optional argument deletechars are removed, and the remaining characters have been mapped through the given translation table. See also char.translate
numpy.chararray.transpose method chararray.transpose(*axes) Returns a view of the array with axes transposed. For a 1-D array this has no effect, as a transposed vector is simply the same vector. To convert a 1-D array into a 2D column vector, an additional dimension must be added. np.atleast_2d(a).T achieves this, as does a[:, np.newaxis]. For a 2-D array, this is a standard matrix transpose. For an n-D array, if axes are given, their order indicates how the axes are permuted (see Examples). If axes are not provided and a.shape = (i[0], i[1], ... i[n-2], i[n-1]), then a.transpose().shape = (i[n-1], i[n-2], ... i[1], i[0]). Parameters axes : None, tuple of ints, or n ints None or no argument: reverses the order of the axes. tuple of ints: i in the j-th place in the tuple means a’s i-th axis becomes a.transpose()’s j-th axis. n ints: same as an n-tuple of the same ints (this form is intended simply as a “convenience” alternative to the tuple form) Returns out : ndarray View of a, with axes suitably permuted. See also transpose Equivalent function ndarray.T Array property returning the array transposed. ndarray.reshape Give a new shape to an array without changing its data. Examples >>> a = np.array([[1, 2], [3, 4]]) >>> a array([[1, 2], [3, 4]]) >>> a.transpose() array([[1, 3], [2, 4]]) >>> a.transpose((1, 0)) array([[1, 3], [2, 4]]) >>> a.transpose(1, 0) array([[1, 3], [2, 4]])
numpy.chararray.upper method chararray.upper()[source] Return an array with the elements of self converted to uppercase. See also char.upper
numpy.chararray.view method chararray.view([dtype][, type]) New view of array with the same data. Note Passing None for dtype is different from omitting the parameter, since the former invokes dtype(None) which is an alias for dtype('float_'). Parameters dtypedata-type or ndarray sub-class, optional Data-type descriptor of the returned view, e.g., float32 or int16. Omitting it results in the view having the same data-type as a. This argument can also be specified as an ndarray sub-class, which then specifies the type of the returned object (this is equivalent to setting the type parameter). typePython type, optional Type of the returned view, e.g., ndarray or matrix. Again, omission of the parameter results in type preservation. Notes a.view() is used two different ways: a.view(some_dtype) or a.view(dtype=some_dtype) constructs a view of the array’s memory with a different data-type. This can cause a reinterpretation of the bytes of memory. a.view(ndarray_subclass) or a.view(type=ndarray_subclass) just returns an instance of ndarray_subclass that looks at the same array (same shape, dtype, etc.) This does not cause a reinterpretation of the memory. For a.view(some_dtype), if some_dtype has a different number of bytes per entry than the previous dtype (for example, converting a regular array to a structured array), then the behavior of the view cannot be predicted just from the superficial appearance of a (shown by print(a)). It also depends on exactly how a is stored in memory. Therefore if a is C-ordered versus fortran-ordered, versus defined as a slice or transpose, etc., the view may give different results. 
Examples >>> x = np.array([(1, 2)], dtype=[('a', np.int8), ('b', np.int8)]) Viewing array data using a different type and dtype: >>> y = x.view(dtype=np.int16, type=np.matrix) >>> y matrix([[513]], dtype=int16) >>> print(type(y)) <class 'numpy.matrix'> Creating a view on a structured array so it can be used in calculations >>> x = np.array([(1, 2),(3,4)], dtype=[('a', np.int8), ('b', np.int8)]) >>> xv = x.view(dtype=np.int8).reshape(-1,2) >>> xv array([[1, 2], [3, 4]], dtype=int8) >>> xv.mean(0) array([2., 3.]) Making changes to the view changes the underlying array >>> xv[0,1] = 20 >>> x array([(1, 20), (3, 4)], dtype=[('a', 'i1'), ('b', 'i1')]) Using a view to convert an array to a recarray: >>> z = x.view(np.recarray) >>> z.a array([1, 3], dtype=int8) Views share data: >>> x[0] = (9, 10) >>> z[0] (9, 10) Views that change the dtype size (bytes per entry) should normally be avoided on arrays defined by slices, transposes, fortran-ordering, etc.: >>> x = np.array([[1,2,3],[4,5,6]], dtype=np.int16) >>> y = x[:, 0:2] >>> y array([[1, 2], [4, 5]], dtype=int16) >>> y.view(dtype=[('width', np.int16), ('length', np.int16)]) Traceback (most recent call last): ... ValueError: To change to a dtype of a different size, the array must be C-contiguous >>> z = y.copy() >>> z.view(dtype=[('width', np.int16), ('length', np.int16)]) array([[(1, 2)], [(4, 5)]], dtype=[('width', '<i2'), ('length', '<i2')])
numpy.chararray.zfill method chararray.zfill(width)[source] Return the numeric string left-filled with zeros in a string of length width. See also char.zfill
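A quick sketch with made-up data; like str.zfill, a leading sign is handled correctly:

```python
import numpy as np

a = np.char.array(['5', '-3', '42'])
print(a.zfill(4))   # ['0005' '-003' '0042']
```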
class.__array__([dtype]) If a class (ndarray subclass or not) having the __array__ method is used as the output object of a ufunc, results will not be written to the object returned by __array__; instead, doing so raises a TypeError.
class.__array_finalize__(obj) This method is called whenever the system internally allocates a new array from obj, where obj is a subclass (subtype) of the ndarray. It can be used to change attributes of self after construction (so as to ensure a 2-d matrix for example), or to update meta-information from the “parent.” Subclasses inherit a default implementation of this method that does nothing.
class.__array_function__(func, types, args, kwargs) New in version 1.16. Note In NumPy 1.17, the protocol is enabled by default, but can be disabled with NUMPY_EXPERIMENTAL_ARRAY_FUNCTION=0. In NumPy 1.16, you need to set the environment variable NUMPY_EXPERIMENTAL_ARRAY_FUNCTION=1 before importing NumPy to use NumPy function overrides. Eventually, expect __array_function__ to always be enabled. func is an arbitrary callable exposed by NumPy’s public API, which was called in the form func(*args, **kwargs). types is a collections.abc.Collection of unique argument types from the original NumPy function call that implement __array_function__. The tuple args and dict kwargs are directly passed on from the original call. As a convenience for __array_function__ implementors, types provides all argument types with an '__array_function__' attribute. This allows implementors to quickly identify cases where they should defer to __array_function__ implementations on other arguments. Implementations should not rely on the iteration order of types. Most implementations of __array_function__ will start with two checks: Is the given function something that we know how to overload? Are all arguments of a type that we know how to handle? If these conditions hold, __array_function__ should return the result from calling its implementation for func(*args, **kwargs). Otherwise, it should return the sentinel value NotImplemented, indicating that the function is not implemented by these types. It may also be convenient to define a custom decorator (implements, below) for registering __array_function__ implementations. There are no general requirements on the return value from __array_function__, although most sensible implementations should probably return array(s) with the same type as one of the function’s arguments.
HANDLED_FUNCTIONS = {} class MyArray: def __array_function__(self, func, types, args, kwargs): if func not in HANDLED_FUNCTIONS: return NotImplemented # Note: this allows subclasses that don't override # __array_function__ to handle MyArray objects if not all(issubclass(t, MyArray) for t in types): return NotImplemented return HANDLED_FUNCTIONS[func](*args, **kwargs) def implements(numpy_function): """Register an __array_function__ implementation for MyArray objects.""" def decorator(func): HANDLED_FUNCTIONS[numpy_function] = func return func return decorator @implements(np.concatenate) def concatenate(arrays, axis=0, out=None): ... # implementation of concatenate for MyArray objects @implements(np.broadcast_to) def broadcast_to(array, shape): ... # implementation of broadcast_to for MyArray objects Note that it is not required for __array_function__ implementations to include all of the corresponding NumPy function’s optional arguments (e.g., broadcast_to above omits the irrelevant subok argument). Optional arguments are only passed in to __array_function__ if they were explicitly used in the NumPy function call. Just like the case for builtin special methods like __add__, properly written __array_function__ methods should always return NotImplemented when an unknown type is encountered. Otherwise, it will be impossible to correctly override NumPy functions from another object if the operation also includes one of your objects. For the most part, the rules for dispatch with __array_function__ match those for __array_ufunc__. In particular: NumPy will gather implementations of __array_function__ from all specified inputs and call them in order: subclasses before superclasses, and otherwise left to right. Note that in some edge cases involving subclasses, this differs slightly from the current behavior of Python. Implementations of __array_function__ indicate that they can handle the operation by returning any value other than NotImplemented. 
If all __array_function__ methods return NotImplemented, NumPy will raise TypeError. If no __array_function__ methods exists, NumPy will default to calling its own implementation, intended for use on NumPy arrays. This case arises, for example, when all array-like arguments are Python numbers or lists. (NumPy arrays do have a __array_function__ method, given below, but it always returns NotImplemented if any argument other than a NumPy array subclass implements __array_function__.) One deviation from the current behavior of __array_ufunc__ is that NumPy will only call __array_function__ on the first argument of each unique type. This matches Python’s rule for calling reflected methods, and this ensures that checking overloads has acceptable performance even when there are a large number of overloaded arguments.
numpy.reference.arrays.classes#numpy.class.__array_function__
class.__array_prepare__(array, context=None) At the beginning of every ufunc, this method is called on the input object with the highest array priority, or the output object if one was specified. The output array is passed in and whatever is returned is passed to the ufunc. Subclasses inherit a default implementation of this method which simply returns the output array unmodified. Subclasses may opt to use this method to transform the output array into an instance of the subclass and update metadata before returning the array to the ufunc for computation. Note For ufuncs, it is hoped to eventually deprecate this method in favour of __array_ufunc__.
numpy.reference.arrays.classes#numpy.class.__array_prepare__
class.__array_priority__ The value of this attribute is used to determine what type of object to return in situations where there is more than one possibility for the Python type of the returned object. Subclasses inherit a default value of 0.0 for this attribute. Note For ufuncs, it is hoped to eventually deprecate this method in favour of __array_ufunc__.
numpy.reference.arrays.classes#numpy.class.__array_priority__
Standard array subclasses

Note Subclassing numpy.ndarray is possible, but if your goal is to create an array with modified behavior, as dask arrays do for distributed computation and cupy arrays do for GPU-based computation, subclassing is discouraged; using numpy's dispatch mechanism is recommended instead.

The ndarray can be inherited from (in Python or in C) if desired, and can therefore form a foundation for many useful classes. Whether to subclass the array object or to simply use the core array component as an internal part of a new class is often a difficult decision, and can be simply a matter of choice. NumPy has several tools for simplifying how your new object interacts with other array objects, so the choice may not be significant in the end. One way to simplify the question is to ask yourself whether the object you are interested in can be represented by a single array, or really requires two or more arrays at its core.

Note that asarray always returns the base-class ndarray. If you are confident that your use of the array object can handle any subclass of an ndarray, then asanyarray can be used to allow subclasses to propagate more cleanly through your subroutine. In principle a subclass could redefine any aspect of the array and therefore, under strict guidelines, asanyarray would rarely be useful. However, most subclasses of the array object will not redefine certain aspects of the array object, such as the buffer interface or the attributes of the array. One important example of why your subroutine may not be able to handle an arbitrary subclass of an array is that matrices redefine the "*" operator to be matrix multiplication rather than element-by-element multiplication.

Special attributes and methods

See also Subclassing ndarray

NumPy provides several hooks that classes can customize:

class.__array_ufunc__(ufunc, method, *inputs, **kwargs) New in version 1.13.
Any class, ndarray subclass or not, can define this method or set it to None in order to override the behavior of NumPy's ufuncs. This works quite similarly to Python's __mul__ and other binary operation routines. ufunc is the ufunc object that was called. method is a string indicating which ufunc method was called (one of "__call__", "reduce", "reduceat", "accumulate", "outer", "inner"). inputs is a tuple of the input arguments to the ufunc. kwargs is a dictionary containing the optional input arguments of the ufunc. If given, any out arguments, both positional and keyword, are passed as a tuple in kwargs. See the discussion in Universal functions (ufunc) for details.

The method should return either the result of the operation, or NotImplemented if the operation requested is not implemented. If one of the input or output arguments has an __array_ufunc__ method, it is executed instead of the ufunc. If more than one of the arguments implements __array_ufunc__, they are tried in the order: subclasses before superclasses, inputs before outputs, otherwise left to right. The first routine returning something other than NotImplemented determines the result. If all of the __array_ufunc__ operations return NotImplemented, a TypeError is raised.

Note We intend to re-implement numpy functions as (generalized) ufuncs, in which case it will become possible for them to be overridden by the __array_ufunc__ method. A prime candidate is matmul, which currently is not a ufunc but could relatively easily be rewritten as a (set of) generalized ufuncs. The same may happen with functions such as median, amin, and argsort.

Like some other special methods in Python, such as __hash__ and __iter__, it is possible to indicate that your class does not support ufuncs by setting __array_ufunc__ = None. Ufuncs always raise TypeError when called on an object that sets __array_ufunc__ = None.
The presence of __array_ufunc__ also influences how ndarray handles binary operations like arr + obj and arr < obj when arr is an ndarray and obj is an instance of a custom class. There are two possibilities. If obj.__array_ufunc__ is present and not None, then ndarray.__add__ and friends will delegate to the ufunc machinery, meaning that arr + obj becomes np.add(arr, obj), and then add invokes obj.__array_ufunc__. This is useful if you want to define an object that acts like an array. Alternatively, if obj.__array_ufunc__ is set to None, then as a special case, special methods like ndarray.__add__ will notice this and unconditionally raise TypeError. This is useful if you want to create objects that interact with arrays via binary operations, but are not themselves arrays. For example, a units handling system might have an object m representing the "meters" unit, and want to support the syntax arr * m to represent that the array has units of "meters", but not want to interact with arrays via ufuncs in any other way. This can be done by setting __array_ufunc__ = None and defining __mul__ and __rmul__ methods. (Note that this means that writing an __array_ufunc__ that always returns NotImplemented is not quite the same as setting __array_ufunc__ = None: in the former case, arr + obj will raise TypeError, while in the latter case it is possible to define a __radd__ method to prevent this.)

The above does not hold for in-place operators, for which ndarray never returns NotImplemented. Hence, arr += obj would always lead to a TypeError. This is because in-place operations on arrays cannot generically be replaced by a simple reverse operation. (For instance, by default, arr += obj would be translated to arr = arr + obj, i.e., arr would be replaced, contrary to what is expected for in-place array operations.)
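The __array_ufunc__ = None pattern just described can be sketched as follows. Meters is a hypothetical class name, and the unit tagging it does is purely illustrative; the dispatch behavior (ndarray's forward method yielding to __rmul__, and arr + m raising TypeError) is the documented mechanism:

```python
import numpy as np

class Meters:
    """Hypothetical unit marker that opts out of ufuncs entirely."""
    __array_ufunc__ = None   # ndarray special methods will defer to us

    def __mul__(self, other):
        # Tag the other operand with a unit label (illustrative only).
        return ("m", np.asarray(other))

    __rmul__ = __mul__

m = Meters()
arr = np.array([1.0, 2.0])

# ndarray.__mul__ sees __array_ufunc__ is None and returns NotImplemented,
# so Python falls back to Meters.__rmul__:
unit, values = arr * m
print(unit)   # m

# With no __radd__ defined, addition ends in TypeError, as documented:
try:
    arr + m
except TypeError:
    print("arr + m raised TypeError")
```

Defining only the operators you want (here __mul__/__rmul__) while leaving everything else to fail loudly is the point of this pattern: the object cooperates with arrays via binary operators but never enters the ufunc machinery.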
Note If you define __array_ufunc__:

If you are not a subclass of ndarray, we recommend your class define special methods like __add__ and __lt__ that delegate to ufuncs just like ndarray does. An easy way to do this is to subclass from NDArrayOperatorsMixin.

If you subclass ndarray, we recommend that you put all your override logic in __array_ufunc__ and not also override special methods. This ensures the class hierarchy is determined in only one place rather than separately by the ufunc machinery and by the binary operation rules (which give preference to special methods of subclasses; the alternative way to enforce a one-place-only hierarchy, setting __array_ufunc__ to None, would seem very unexpected and thus confusing, as then the subclass would not work at all with ufuncs).

ndarray defines its own __array_ufunc__, which evaluates the ufunc if no arguments have overrides, and returns NotImplemented otherwise. This may be useful for subclasses for which __array_ufunc__ converts any instances of its own class to ndarray: it can then pass these on to its superclass using super().__array_ufunc__(*inputs, **kwargs), and finally return the results after possible back-conversion. The advantage of this practice is that it ensures that it is possible to have a hierarchy of subclasses that extend the behaviour. See Subclassing ndarray for details.

Note If a class defines the __array_ufunc__ method, this disables the __array_wrap__, __array_prepare__, __array_priority__ mechanism described below for ufuncs (which may eventually be deprecated).

class.__array_function__(func, types, args, kwargs) New in version 1.16.

Note In NumPy 1.17, the protocol is enabled by default, but can be disabled with NUMPY_EXPERIMENTAL_ARRAY_FUNCTION=0. In NumPy 1.16, you need to set the environment variable NUMPY_EXPERIMENTAL_ARRAY_FUNCTION=1 before importing NumPy to use NumPy function overrides. Eventually, expect __array_function__ to always be enabled.
func is an arbitrary callable exposed by NumPy's public API, which was called in the form func(*args, **kwargs). types is a collection (collections.abc.Collection) of unique argument types from the original NumPy function call that implement __array_function__. The tuple args and dict kwargs are directly passed on from the original call.

As a convenience for __array_function__ implementors, types provides all argument types with an '__array_function__' attribute. This allows implementors to quickly identify cases where they should defer to __array_function__ implementations on other arguments. Implementations should not rely on the iteration order of types.

Most implementations of __array_function__ will start with two checks: Is the given function something that we know how to overload? Are all arguments of a type that we know how to handle? If these conditions hold, __array_function__ should return the result from calling its implementation for func(*args, **kwargs). Otherwise, it should return the sentinel value NotImplemented, indicating that the function is not implemented by these types.

There are no general requirements on the return value from __array_function__, although most sensible implementations should probably return array(s) with the same type as one of the function's arguments. It may also be convenient to define a custom decorator (implements below) for registering __array_function__ implementations.
import numpy as np

HANDLED_FUNCTIONS = {}

class MyArray:
    def __array_function__(self, func, types, args, kwargs):
        if func not in HANDLED_FUNCTIONS:
            return NotImplemented
        # Note: this allows subclasses that don't override
        # __array_function__ to handle MyArray objects
        if not all(issubclass(t, MyArray) for t in types):
            return NotImplemented
        return HANDLED_FUNCTIONS[func](*args, **kwargs)

def implements(numpy_function):
    """Register an __array_function__ implementation for MyArray objects."""
    def decorator(func):
        HANDLED_FUNCTIONS[numpy_function] = func
        return func
    return decorator

@implements(np.concatenate)
def concatenate(arrays, axis=0, out=None):
    ...  # implementation of concatenate for MyArray objects

@implements(np.broadcast_to)
def broadcast_to(array, shape):
    ...  # implementation of broadcast_to for MyArray objects

Note that it is not required for __array_function__ implementations to include all of the corresponding NumPy function's optional arguments (e.g., broadcast_to above omits the irrelevant subok argument). Optional arguments are only passed in to __array_function__ if they were explicitly used in the NumPy function call.

Just like the case for builtin special methods like __add__, properly written __array_function__ methods should always return NotImplemented when an unknown type is encountered. Otherwise, it will be impossible to correctly override NumPy functions from another object if the operation also includes one of your objects.

For the most part, the rules for dispatch with __array_function__ match those for __array_ufunc__. In particular: NumPy will gather implementations of __array_function__ from all specified inputs and call them in order: subclasses before superclasses, and otherwise left to right. Note that in some edge cases involving subclasses, this differs slightly from the current behavior of Python. Implementations of __array_function__ indicate that they can handle the operation by returning any value other than NotImplemented.
If all __array_function__ methods return NotImplemented, NumPy will raise TypeError. If no __array_function__ method exists, NumPy will default to calling its own implementation, intended for use on NumPy arrays. This case arises, for example, when all array-like arguments are Python numbers or lists. (NumPy arrays do have a __array_function__ method, given below, but it always returns NotImplemented if any argument other than a NumPy array subclass implements __array_function__.)

One deviation from the current behavior of __array_ufunc__ is that NumPy will only call __array_function__ on the first argument of each unique type. This matches Python's rule for calling reflected methods, and ensures that checking overloads has acceptable performance even when there are a large number of overloaded arguments.

class.__array_finalize__(obj) This method is called whenever the system internally allocates a new array from obj, where obj is a subclass (subtype) of the ndarray. It can be used to change attributes of self after construction (so as to ensure a 2-d matrix, for example), or to update meta-information from the "parent." Subclasses inherit a default implementation of this method that does nothing.

class.__array_prepare__(array, context=None) At the beginning of every ufunc, this method is called on the input object with the highest array priority, or the output object if one was specified. The output array is passed in and whatever is returned is passed to the ufunc. Subclasses inherit a default implementation of this method which simply returns the output array unmodified. Subclasses may opt to use this method to transform the output array into an instance of the subclass and update metadata before returning the array to the ufunc for computation. Note For ufuncs, it is hoped to eventually deprecate this method in favour of __array_ufunc__.
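The __array_finalize__ hook described above is easiest to see in a small subclass. The following sketch, with a hypothetical InfoArray class carrying a single info attribute, follows the pattern from the Subclassing ndarray guide:

```python
import numpy as np

class InfoArray(np.ndarray):
    """Minimal ndarray subclass carrying one piece of metadata."""

    def __new__(cls, input_array, info=None):
        obj = np.asarray(input_array).view(cls)
        obj.info = info
        return obj

    def __array_finalize__(self, obj):
        # Called for explicit construction (obj is None), view casting,
        # and new-from-template (e.g. slicing); propagate the metadata.
        if obj is None:
            return
        self.info = getattr(obj, 'info', None)

a = InfoArray([1, 2, 3], info='calibrated')
s = a[1:]   # new-from-template: __array_finalize__ runs on the slice
print(type(s).__name__, s.info)   # InfoArray calibrated
```

Without the __array_finalize__ override, the slice s would be an InfoArray lacking the info attribute entirely, since __new__ is bypassed when NumPy allocates arrays internally.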
class.__array_wrap__(array, context=None) At the end of every ufunc, this method is called on the input object with the highest array priority, or the output object if one was specified. The ufunc-computed array is passed in and whatever is returned is passed to the user. Subclasses inherit a default implementation of this method, which transforms the array into a new instance of the object's class. Subclasses may opt to use this method to transform the output array into an instance of the subclass and update metadata before returning the array to the user. Note For ufuncs, it is hoped to eventually deprecate this method in favour of __array_ufunc__.

class.__array_priority__ The value of this attribute is used to determine what type of object to return in situations where there is more than one possibility for the Python type of the returned object. Subclasses inherit a default value of 0.0 for this attribute. Note For ufuncs, it is hoped to eventually deprecate this attribute in favour of __array_ufunc__.

class.__array__([dtype]) If a class (ndarray subclass or not) having the __array__ method is used as the output object of a ufunc, results will not be written to the object returned by __array__. This practice will raise a TypeError.

Matrix objects

Note It is strongly advised not to use the matrix subclass. As described below, it makes writing functions that deal consistently with matrices and regular arrays very difficult. Currently, matrices are mainly used for interacting with scipy.sparse. We hope to provide an alternative for this use, however, and eventually remove the matrix subclass.

matrix objects inherit from the ndarray and therefore have the same attributes and methods as ndarrays.
There are six important differences of matrix objects, however, that may lead to unexpected results when you use matrices but expect them to act like arrays:

Matrix objects can be created using a string notation to allow Matlab-style syntax where spaces separate columns and semicolons (';') separate rows.

Matrix objects are always two-dimensional. This has far-reaching implications, in that m.ravel() is still two-dimensional (with a 1 in the first dimension) and item selection returns two-dimensional objects, so that sequence behavior is fundamentally different from that of arrays.

Matrix objects over-ride multiplication to be matrix multiplication. Make sure you understand this for functions that you may want to receive matrices, especially in light of the fact that asanyarray(m) returns a matrix when m is a matrix.

Matrix objects over-ride power to be matrix raised to a power. The same warning about using power inside a function that uses asanyarray(...) to get an array object holds for this fact.

The default __array_priority__ of matrix objects is 10.0, and therefore mixed operations with ndarrays always produce matrices.

Matrices have special attributes which make calculations easier. These are:
matrix.T Returns the transpose of the matrix.
matrix.H Returns the (complex) conjugate transpose of self.
matrix.I Returns the (multiplicative) inverse of invertible self.
matrix.A Return self as an ndarray object.

Warning Matrix objects over-ride multiplication, '*', and power, '**', to be matrix multiplication and matrix power, respectively. If your subroutine can accept sub-classes and you do not convert to base-class arrays, then you must use the ufuncs multiply and power to be sure that you are performing the correct operation for all inputs.

The matrix class is a Python subclass of the ndarray and can be used as a reference for how to construct your own subclass of the ndarray.
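The differences listed above can be demonstrated in a few lines. This is a quick sketch using the legacy np.matrix class (which emits a pending-deprecation warning in recent NumPy releases, per the note above):

```python
import numpy as np

m = np.matrix('1 2; 3 4')        # Matlab-style string notation
a = np.array([[1, 2], [3, 4]])

print(m.ravel().shape)           # (1, 4): matrices stay two-dimensional
print((m * m)[0, 0])             # 7: '*' is matrix multiplication
print((a * a)[0, 0])             # 1: '*' is element-wise for ndarray
print(type(a + m).__name__)      # matrix: mixed operations yield matrices
print(type(m.A).__name__)        # ndarray: .A drops back to the base class
```

This contrast is exactly why the surrounding text recommends np.multiply and np.power inside subroutines that accept subclasses: only the ufuncs behave identically for both inputs.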
Matrices can be created from other matrices, strings, and anything else that can be converted to an ndarray. The name "mat" is an alias for "matrix" in NumPy.

matrix(data[, dtype, copy]) Note: It is no longer recommended to use this class, even for linear algebra.
asmatrix(data[, dtype]) Interpret the input as a matrix.
bmat(obj[, ldict, gdict]) Build a matrix object from a string, nested sequence, or array.

Example 1: Matrix creation from a string

>>> a = np.mat('1 2 3; 4 5 3')
>>> print((a*a.T).I)
[[ 0.29239766 -0.13450292]
 [-0.13450292  0.08187135]]

Example 2: Matrix creation from a nested sequence

>>> np.mat([[1,5,10],[1.0,3,4j]])
matrix([[ 1.+0.j,  5.+0.j, 10.+0.j],
        [ 1.+0.j,  3.+0.j,  0.+4.j]])

Example 3: Matrix creation from an array

>>> np.mat(np.random.rand(3,3)).T
matrix([[4.17022005e-01, 3.02332573e-01, 1.86260211e-01],
        [7.20324493e-01, 1.46755891e-01, 3.45560727e-01],
        [1.14374817e-04, 9.23385948e-02, 3.96767474e-01]])

Memory-mapped file arrays

Memory-mapped files are useful for reading and/or modifying small segments of a large file with regular layout, without reading the entire file into memory. A simple subclass of the ndarray uses a memory-mapped file for the data buffer of the array. For small files, the overhead of reading the entire file into memory is typically not significant; however, for large files, using memory mapping can save considerable resources. Memory-mapped-file arrays have one additional method (besides those they inherit from the ndarray): .flush(), which must be called manually by the user to ensure that any changes to the array actually get written to disk.

memmap(filename[, dtype, mode, offset, ...]) Create a memory-map to an array stored in a binary file on disk.
memmap.flush() Write any changes in the array to the file on disk.
Example:

>>> a = np.memmap('newfile.dat', dtype=float, mode='w+', shape=1000)
>>> a[10] = 10.0
>>> a[30] = 30.0
>>> del a
>>> b = np.fromfile('newfile.dat', dtype=float)
>>> print(b[10], b[30])
10.0 30.0
>>> a = np.memmap('newfile.dat', dtype=float)
>>> print(a[10], a[30])
10.0 30.0

Character arrays (numpy.char)

See also Creating character arrays (numpy.char)

Note The chararray class exists for backwards compatibility with Numarray; it is not recommended for new development. Starting from numpy 1.4, if one needs arrays of strings, it is recommended to use arrays of dtype object_, bytes_ or str_, and use the free functions in the numpy.char module for fast vectorized string operations.

These are enhanced arrays of either str_ type or bytes_ type. These arrays inherit from the ndarray, but specially define the operations +, *, and % on a (broadcasting) element-by-element basis. These operations are not available on the standard ndarray of character type. In addition, the chararray has all of the standard str (and bytes) methods, executing them on an element-by-element basis. Perhaps the easiest way to create a chararray is to use self.view(chararray) where self is an ndarray of str or unicode data-type. However, a chararray can also be created using the numpy.chararray constructor, or via the numpy.char.array function:

chararray(shape[, itemsize, unicode, ...]) Provides a convenient view on arrays of string and unicode values.
core.defchararray.array(obj[, itemsize, ...]) Create a chararray.

Another difference with the standard ndarray of str data-type is that the chararray inherits the feature introduced by Numarray that white-space at the end of any element in the array will be ignored on item retrieval and comparison operations.

Record arrays (numpy.rec)

See also Creating record arrays (numpy.rec), Data type routines, Data type objects (dtype).
NumPy provides the recarray class, which allows accessing the fields of a structured array as attributes, and a corresponding scalar data type object record.

recarray(shape[, dtype, buf, offset, ...]) Construct an ndarray that allows field access using attributes.
record A data-type scalar that allows field access as attribute lookup.

Masked arrays (numpy.ma)

See also Masked arrays

Standard container class

For backward compatibility and as a standard "container" class, the UserArray from Numeric has been brought over to NumPy and named numpy.lib.user_array.container. The container class is a Python class whose self.array attribute is an ndarray. Multiple inheritance is probably easier with numpy.lib.user_array.container than with the ndarray itself, and so it is included by default. It is not documented here beyond mentioning its existence because you are encouraged to use the ndarray class directly if you can.

numpy.lib.user_array.container(data[, ...]) Standard container-class for easy multiple-inheritance.

Array Iterators

Iterators are a powerful concept for array processing. Essentially, iterators implement a generalized for-loop. If myiter is an iterator object, then the Python code:

for val in myiter:
    ... some code involving val ...

calls val = next(myiter) repeatedly until StopIteration is raised by the iterator. There are several ways to iterate over an array that may be useful: default iteration, flat iteration, and \(N\)-dimensional enumeration.

Default iteration

The default iterator of an ndarray object is the default Python iterator of a sequence type; thus, the array object itself can be used as an iterator. The default behavior is equivalent to:

for i in range(arr.shape[0]):
    val = arr[i]

This default iterator selects a sub-array of dimension \(N-1\) from the array. This can be a useful construct for defining recursive algorithms. To loop over the entire array requires \(N\) for-loops.

>>> a = np.arange(24).reshape(3,2,4)+10
>>> for val in a:
...
    print('item:', val)
item: [[10 11 12 13]
 [14 15 16 17]]
item: [[18 19 20 21]
 [22 23 24 25]]
item: [[26 27 28 29]
 [30 31 32 33]]

Flat iteration

ndarray.flat A 1-D iterator over the array.

As mentioned previously, the flat attribute of ndarray objects returns an iterator that will cycle over the entire array in C-style contiguous order.

>>> for i, val in enumerate(a.flat):
...     if i%5 == 0: print(i, val)
0 10
5 15
10 20
15 25
20 30

Here, I've used the built-in enumerate iterator to return the iterator index as well as the value.

N-dimensional enumeration

ndenumerate(arr) Multidimensional index iterator.

Sometimes it may be useful to get the N-dimensional index while iterating. The ndenumerate iterator can achieve this.

>>> for i, val in np.ndenumerate(a):
...     if sum(i)%5 == 0: print(i, val)
(0, 0, 0) 10
(1, 1, 3) 25
(2, 0, 3) 29
(2, 1, 2) 32

Iterator for broadcasting

broadcast Produce an object that mimics broadcasting.

The general concept of broadcasting is also available from Python using the broadcast iterator. This object takes \(N\) objects as inputs and returns an iterator that returns tuples providing each of the input sequence elements in the broadcasted result.

>>> for val in np.broadcast([[1,0],[2,3]],[0,1]):
...     print(val)
(1, 0)
(0, 1)
(2, 0)
(3, 1)
NumPy Distutils - Users Guide

SciPy structure

Currently the SciPy project consists of two packages:

NumPy — it provides packages like:
numpy.distutils - extension to Python distutils
numpy.f2py - a tool to bind Fortran/C codes to Python
numpy.core - future replacement of Numeric and numarray packages
numpy.lib - extra utility functions
numpy.testing - numpy-style tools for unit testing
etc.

SciPy — a collection of scientific tools for Python.

The aim of this document is to describe how to add new tools to SciPy.

Requirements for SciPy packages

SciPy consists of Python packages, called SciPy packages, that are available to Python users via the scipy namespace. Each SciPy package may contain other SciPy packages, and so on; the SciPy directory tree is therefore a tree of packages with arbitrary depth and width. Any SciPy package may depend on NumPy packages, but the dependence on other SciPy packages should be kept minimal or zero.

A SciPy package contains, in addition to its sources, the following files and directories:

setup.py — building script
__init__.py — package initializer
tests/ — directory of unittests

Their contents are described below.

The setup.py file

In order to add a Python package to SciPy, its build script (setup.py) must meet certain requirements. The most important requirement is that the package define a configuration(parent_package='',top_path=None) function which returns a dictionary suitable for passing to numpy.distutils.core.setup(..). To simplify the construction of this dictionary, numpy.distutils.misc_util provides the Configuration class, described below.
SciPy pure Python package example

Below is an example of a minimal setup.py file for a pure SciPy package:

#!/usr/bin/env python3
def configuration(parent_package='',top_path=None):
    from numpy.distutils.misc_util import Configuration
    config = Configuration('mypackage',parent_package,top_path)
    return config

if __name__ == "__main__":
    from numpy.distutils.core import setup
    #setup(**configuration(top_path='').todict())
    setup(configuration=configuration)

The arguments of the configuration function specify the name of the parent SciPy package (parent_package) and the directory location of the main setup.py script (top_path). These arguments, along with the name of the current package, should be passed to the Configuration constructor. The Configuration constructor has a fourth optional argument, package_path, that can be used when package files are located in a different location than the directory of the setup.py file.

Remaining Configuration arguments are all keyword arguments that will be used to initialize attributes of the Configuration instance. Usually, these keywords are the same as the ones the setup(..) function would expect, for example, packages, ext_modules, data_files, include_dirs, libraries, headers, scripts, package_dir, etc. However, the direct specification of these keywords is not recommended, as the content of these keyword arguments will not be processed or checked for consistency by the SciPy building system.

Finally, Configuration has a .todict() method that returns all the configuration data as a dictionary suitable for passing on to the setup(..) function.

Configuration instance attributes

In addition to attributes that can be specified via keyword arguments to the Configuration constructor, a Configuration instance (let us denote it as config) has the following attributes that can be useful in writing setup scripts:

config.name - full name of the current package. The names of parent packages can be extracted as config.name.split('.').
config.local_path - path to the location of the current setup.py file.
config.top_path - path to the location of the main setup.py file.

Configuration instance methods

config.todict() — returns a configuration dictionary suitable for passing to the numpy.distutils.core.setup(..) function.

config.paths(*paths) — applies glob.glob(..) to items of paths if necessary. Fixes paths items that are relative to config.local_path.

config.get_subpackage(subpackage_name, subpackage_path=None) — returns a list of subpackage configurations. The subpackage is looked for in the current directory under the name subpackage_name, but the path can also be specified via the optional subpackage_path argument. If subpackage_name is specified as None, then the subpackage name will be taken as the basename of subpackage_path. Any * used in subpackage names is expanded as a wildcard.

config.add_subpackage(subpackage_name, subpackage_path=None) — add a SciPy subpackage configuration to the current one. The meaning and usage of arguments is explained above; see the config.get_subpackage() method.

config.add_data_files(*files) — prepend files to the data_files list. If a files item is a tuple, then its first element defines the suffix of where data files are copied relative to the package installation directory, and the second element specifies the path to the data files. By default, data files are copied under the package installation directory. For example,

config.add_data_files('foo.dat',
                      ('fun', ['gun.dat', 'nun/pun.dat', '/tmp/sun.dat']),
                      'bar/car.dat',
                      '/full/path/to/can.dat',
                      )

will install data files to the following locations:

<installation path of config.name package>/
  foo.dat
  fun/
    gun.dat
    pun.dat
    sun.dat
  bar/
    car.dat
  can.dat

The path to data files can be a function taking no arguments and returning path(s) to data files — this is useful when data files are generated while building the package. (XXX: explain exactly when such functions are called.)

config.add_data_dir(data_path) — add the directory data_path recursively to data_files.
The whole directory tree starting at data_path will be copied under the package installation directory. If data_path is a tuple, its first element defines the suffix of where data files are copied relative to the package installation directory, and the second element specifies the path to the data directory. By default, data directories are copied under the package installation directory under the basename of data_path. For example,

config.add_data_dir('fun')  # fun/ contains foo.dat bar/car.dat
config.add_data_dir(('sun', 'fun'))
config.add_data_dir(('gun', '/full/path/to/fun'))

will install data files to the following locations

<installation path of config.name package>/
  fun/
    foo.dat
    bar/
      car.dat
  sun/
    foo.dat
    bar/
      car.dat
  gun/
    foo.dat
    bar/
      car.dat

config.add_include_dirs(*paths) — prepends paths to the include_dirs list. This list will be visible to all extension modules of the current package.

config.add_headers(*files) — prepends files to the headers list. By default, headers will be installed under the <prefix>/include/pythonX.X/<config.name.replace('.','/')>/ directory. If a files item is a tuple, its first argument specifies the installation suffix relative to the <prefix>/include/pythonX.X/ path. This is a Python distutils method; its use is discouraged for NumPy and SciPy in favour of config.add_data_files(*files).

config.add_scripts(*files) — prepends files to the scripts list. Scripts will be installed under the <prefix>/bin/ directory.

config.add_extension(name, sources, **kw) — creates and adds an Extension instance to the ext_modules list. The first argument name defines the name of the extension module that will be installed under the config.name package. The second argument is a list of sources. The add_extension method also takes keyword arguments that are passed on to the Extension constructor.
The list of allowed keywords is the following: include_dirs, define_macros, undef_macros, library_dirs, libraries, runtime_library_dirs, extra_objects, extra_compile_args, extra_link_args, export_symbols, swig_opts, depends, language, f2py_options, module_dirs, extra_info, extra_f77_compile_args, extra_f90_compile_args.

Note that the config.paths method is applied to all lists that may contain paths. extra_info is a dictionary or a list of dictionaries whose content will be appended to the keyword arguments. The depends list contains paths to files or directories that the sources of the extension module depend on. If any path in the depends list is newer than the extension module, then the module will be rebuilt.

The list of sources may contain functions ('source generators') following the pattern def <funcname>(ext, build_dir): return <source(s) or None>. If funcname returns None, no sources are generated. And if the Extension instance has no sources after processing all source generators, no extension module will be built. This is the recommended way to conditionally define extension modules. Source generator functions are called by the build_src sub-command of numpy.distutils. For example, here is a typical source generator function:

def generate_source(ext, build_dir):
    import os
    from distutils.dep_util import newer
    target = os.path.join(build_dir, 'somesource.c')
    if newer(target, __file__):
        # create target file
        pass
    return target

The first argument contains the Extension instance and can be used to access attributes such as the depends and sources lists and modify them during the build process. The second argument gives the path to a build directory that must be used when creating files on disk.

config.add_library(name, sources, **build_info) — adds a library to the libraries list. Allowed keyword arguments are depends, macros, include_dirs, extra_compiler_args, f2py_options, extra_f77_compile_args, extra_f90_compile_args.
See the .add_extension() method for more information on the arguments.

config.have_f77c() — returns True if a Fortran 77 compiler is available (read: simple Fortran 77 code compiled successfully).

config.have_f90c() — returns True if a Fortran 90 compiler is available (read: simple Fortran 90 code compiled successfully).

config.get_version() — returns the version string of the current package, or None if version information could not be detected. This method scans the files __version__.py, <packagename>_version.py, version.py, and __svn_version__.py for string variables version, __version__, and <packagename>_version.

config.make_svn_version_py() — appends a data function to the data_files list that will generate an __svn_version__.py file in the current package directory. The file will be removed from the source directory when Python exits.

config.get_build_temp_dir() — returns a path to a temporary directory. This is the place where one should build temporary files.

config.get_distribution() — returns the distutils Distribution instance.

config.get_config_cmd() — returns a numpy.distutils config command instance.

config.get_info(*names) —

Conversion of .src files using Templates

NumPy distutils supports automatic conversion of source files named <somefile>.src. This facility can be used to maintain very similar code blocks requiring only simple changes between blocks. During the build phase of setup, if a template file named <somefile>.src is encountered, a new file named <somefile> is constructed from the template and placed in the build directory to be used instead. Two forms of template conversion are supported. The first form occurs for files named <file>.ext.src where ext is a recognized Fortran extension (f, f90, f95, f77, for, ftn, pyf). The second form is used for all other cases.

Fortran files

This template converter will replicate all function and subroutine blocks in the file with names that contain '<…>' according to the rules in '<…>'.
The number of comma-separated words in '<…>' determines the number of times the block is repeated. The words themselves indicate what the repeat rule, '<…>', should be replaced with in each block. All of the repeat rules in a block must contain the same number of comma-separated words, indicating the number of times that block should be repeated. If a word in the repeat rule needs a comma, leftarrow, or rightarrow, then prepend it with a backslash '\'. If a word in the repeat rule matches '\<index>' then it will be replaced with the <index>-th word in the same repeat specification. There are two forms of the repeat rule: named and short.

Named repeat rule

A named repeat rule is useful when the same set of repeats must be used several times in a block. It is specified using <rule1=item1, item2, item3, ..., itemN>, where N is the number of times the block should be repeated. On each repeat of the block, the entire expression '<…>' will be replaced first with item1, then with item2, and so forth until N repeats are accomplished. Once a named repeat specification has been introduced, the same repeat rule may be used in the current block by referring only to the name (i.e. <rule1>).

Short repeat rule

A short repeat rule looks like <item1, item2, item3, ..., itemN>. The rule specifies that the entire expression '<…>' should be replaced first with item1, then with item2, and so forth until N repeats are accomplished.

Pre-defined names

The following predefined named repeat rules are available:

<prefix=s,d,c,z>
<_c=s,d,c,z>
<_t=real, double precision, complex, double complex>
<ftype=real, double precision, complex, double complex>
<ctype=float, double, complex_float, complex_double>
<ftypereal=float, double precision, \0, \1>
<ctypereal=float, double, \0, \1>

Other files

Non-Fortran files use a separate syntax for defining template blocks that should be repeated, using a variable expansion similar to the named repeat rules of the Fortran-specific repeats.
NumPy Distutils preprocesses C source files (extension: .c.src) written in a custom templating language to generate C code. The @ symbol is used to wrap macro-style variables to empower a string substitution mechanism that might describe (for instance) a set of data types. The template language blocks are delimited by /**begin repeat and /**end repeat**/ lines, which may also be nested using consecutively numbered delimiting lines such as /**begin repeat1 and /**end repeat1**/:

/**begin repeat on a line by itself marks the beginning of a segment that should be repeated.

Named variable expansions are defined using #name=item1, item2, item3, ..., itemN# and placed on successive lines. These variables are replaced in each repeat block with the corresponding word. All named variables in the same repeat block must define the same number of words.

In specifying the repeat rule for a named variable, item*N is shorthand for item, item, ..., item repeated N times. In addition, parentheses in combination with *N can be used for grouping several items that should be repeated. Thus, #name=(item1, item2)*4# is equivalent to #name=item1, item2, item1, item2, item1, item2, item1, item2#.

*/ on a line by itself marks the end of the variable expansion naming. The next line is the first line that will be repeated using the named rules.

Inside the block to be repeated, the variables that should be expanded are specified as @name@.

/**end repeat**/ on a line by itself marks the previous line as the last line of the block to be repeated.

A loop in the NumPy C source code may have a @TYPE@ variable, targeted for string substitution, which is preprocessed into a number of otherwise identical loops with several strings such as INT, LONG, UINT, ULONG. The @TYPE@ style syntax thus reduces code duplication and maintenance burden by mimicking languages that have generic type support.
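The substitution mechanism can be sketched in plain Python. The following is a simplified, hypothetical re-implementation (not the actual conv_template.py) of a single non-nested repeat block, assuming the variable definitions and the block body have already been parsed out:

```python
import re

def expand_block(definitions, body):
    """Expand one repeat block.

    definitions: dict mapping variable names to equal-length word lists,
                 e.g. {"TYPE": ["INT", "LONG"], "type": ["npy_int", "npy_long"]}
    body:        template text containing @name@ placeholders.

    Returns one copy of body per repeat, with every @name@ replaced by
    the word for that repeat index.
    """
    counts = {len(words) for words in definitions.values()}
    if len(counts) != 1:
        raise ValueError("all variables must define the same number of words")
    n = counts.pop()
    out = []
    for i in range(n):
        out.append(re.sub(r"@(\w+)@",
                          lambda m: definitions[m.group(1)][i], body))
    return "".join(out)

print(expand_block({"TYPE": ["INT", "LONG"]}, "case NPY_@TYPE@:\n"))
```

The real converter additionally handles nesting, the *N shorthand, and grouping with parentheses; this sketch only shows the core word-for-placeholder expansion.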
The above rules may be clearer in the following template source example:

/* TIMEDELTA to non-float types */

/**begin repeat
 *
 * #TOTYPE = BYTE, UBYTE, SHORT, USHORT, INT, UINT, LONG, ULONG,
 *           LONGLONG, ULONGLONG, DATETIME,
 *           TIMEDELTA#
 * #totype = npy_byte, npy_ubyte, npy_short, npy_ushort, npy_int, npy_uint,
 *           npy_long, npy_ulong, npy_longlong, npy_ulonglong,
 *           npy_datetime, npy_timedelta#
 */

/**begin repeat1
 *
 * #FROMTYPE = TIMEDELTA#
 * #fromtype = npy_timedelta#
 */
static void
@FROMTYPE@_to_@TOTYPE@(void *input, void *output, npy_intp n,
        void *NPY_UNUSED(aip), void *NPY_UNUSED(aop))
{
    const @fromtype@ *ip = input;
    @totype@ *op = output;

    while (n--) {
        *op++ = (@totype@)*ip++;
    }
}
/**end repeat1**/

/**end repeat**/

The preprocessing of generically-typed C source files (whether in NumPy proper or in any third party package using NumPy Distutils) is performed by conv_template.py. The type-specific C files generated (extension: .c) by these modules during the build process are ready to be compiled. This form of generic typing is also supported for C header files (preprocessed to produce .h files).

Useful functions in numpy.distutils.misc_util

get_numpy_include_dirs() — returns a list of NumPy base include directories. NumPy base include directories contain header files such as numpy/arrayobject.h, numpy/funcobject.h etc. For installed NumPy the returned list has length 1, but when building NumPy the list may contain more directories, for example, a path to the config.h file that the numpy/base/setup.py file generates and that is used by numpy header files.

append_path(prefix, path) — smart append path to prefix.

gpaths(paths, local_path='') — applies glob to paths and prepends local_path if needed.

njoin(*path) — joins pathname components, converts /-separated paths to os.sep-separated paths, and resolves .., . from paths. Ex. njoin('a',['b','./c'],'..','g') -> os.path.join('a','b','g').
minrelpath(path) — resolves dots in path.

rel_path(path, parent_path) — returns path relative to parent_path.

get_cmd(cmdname, _cache={}) — returns a numpy.distutils command instance.

all_strings(lst)
has_f_sources(sources)
has_cxx_sources(sources)
filter_sources(sources) — returns c_sources, cxx_sources, f_sources, fmodule_sources
get_dependencies(sources)
is_local_src_dir(directory)
get_ext_source_files(ext)
get_script_files(scripts)
get_lib_source_files(lib)
get_data_files(data)

dot_join(*args) — joins non-zero arguments with a dot.

get_frame(level=0) — returns a frame object from the call stack at the given level.

cyg2win32(path)

mingw32() — returns True when using the mingw32 environment.

terminal_has_colors(), red_text(s), green_text(s), yellow_text(s), blue_text(s), cyan_text(s)

get_path(mod_name, parent_path=None) — returns the path of a module relative to parent_path when given. Also handles the __main__ and __builtin__ modules.

allpath(name) — replaces / with os.sep in name.

cxx_ext_match, fortran_ext_match, f90_ext_match, f90_module_name_match

numpy.distutils.system_info module

get_info(name, notfound_action=0)
combine_paths(*args, **kws)
show_all()

numpy.distutils.cpuinfo module

cpuinfo

numpy.distutils.log module

set_verbosity(v)

numpy.distutils.exec_command module

get_pythonexe()
find_executable(exe, path=None)
exec_command(command, execute_in='', use_shell=None, use_tee=None, **env)

The __init__.py file

The header of a typical SciPy __init__.py is:

"""
Package docstring, typically with a brief description and function listing.
"""

# import functions into module namespace
from .subpackage import *
...

__all__ = [s for s in dir() if not s.startswith('_')]

from numpy.testing import Tester
test = Tester().test
bench = Tester().bench

Extra features in NumPy Distutils

Specifying config_fc options for libraries in setup.py script

It is possible to specify config_fc options in setup.py scripts.
For example, using

config.add_library('library',
                   sources=[...],
                   config_fc={'noopt': (__file__, 1)})

will compile the library sources without optimization flags. It's recommended to specify in this way only those config_fc options that are compiler independent.

Getting extra Fortran 77 compiler options from source

Some old Fortran codes need special compiler options in order to work correctly. In order to specify compiler options per source file, the numpy.distutils Fortran compiler looks for the following pattern:

CF77FLAGS(<fcompiler type>) = <fcompiler f77flags>

in the first 20 lines of the source and uses the f77flags for the specified type of fcompiler (the first character C is optional).

TODO: This feature can be easily extended for Fortran 90 codes as well. Let us know if you would need such a feature.
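Pulling the pieces above together, a hypothetical setup.py for a package with a data directory and a C extension might look like the following sketch. The names 'mypackage', 'tests', 'data', and 'spam.c' are made up for illustration, and numpy.distutils must be available (it is deprecated in recent NumPy releases):

```python
#!/usr/bin/env python3
# Hypothetical sketch combining several Configuration methods.

def configuration(parent_package='', top_path=None):
    from numpy.distutils.misc_util import Configuration
    config = Configuration('mypackage', parent_package, top_path)
    config.add_subpackage('tests')      # subdirectory with its own setup.py
    config.add_data_dir('data')         # copied recursively on install
    config.add_extension('spam',        # builds the mypackage.spam module
                         sources=['spam.c'])
    return config

if __name__ == "__main__":
    from numpy.distutils.core import setup
    setup(configuration=configuration)
```

This is a build-configuration fragment rather than a standalone script; it only does something useful when run from a source tree laid out as described.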
numpy.reference.distutils_guide
numpy.core.defchararray.array core.defchararray.array(obj, itemsize=None, copy=True, unicode=None, order=None)[source] Create a chararray. Note This class is provided for numarray backward-compatibility. New code (not concerned with numarray compatibility) should use arrays of type string_ or unicode_ and use the free functions in numpy.char for fast vectorized string operations instead. Versus a regular NumPy array of type str or unicode, this class adds the following functionality: values automatically have whitespace removed from the end when indexed comparison operators automatically remove whitespace from the end when comparing values vectorized string operations are provided as methods (e.g. str.endswith) and infix operators (e.g. +, *, %) Parameters objarray of str or unicode-like itemsizeint, optional itemsize is the number of characters per scalar in the resulting array. If itemsize is None, and obj is an object array or a Python list, the itemsize will be automatically determined. If itemsize is provided and obj is of type str or unicode, then the obj string will be chunked into itemsize pieces. copybool, optional If true (default), then the object is copied. Otherwise, a copy will only be made if __array__ returns a copy, if obj is a nested sequence, or if a copy is needed to satisfy any of the other requirements (itemsize, unicode, order, etc.). unicodebool, optional When true, the resulting chararray can contain Unicode characters, when false only 8-bit characters. If unicode is None and obj is one of the following: a chararray, an ndarray of type str or unicode a Python str or unicode object, then the unicode setting of the output array will be automatically determined. order{‘C’, ‘F’, ‘A’}, optional Specify the order of the array. If order is ‘C’ (default), then the array will be in C-contiguous order (last-index varies the fastest). If order is ‘F’, then the returned array will be in Fortran-contiguous order (first-index varies the fastest). 
If order is ‘A’, then the returned array may be in any order (either C-, Fortran-contiguous, or even discontiguous).
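As a quick illustration of the trailing-whitespace behaviour described above (assuming NumPy is installed):

```python
import numpy as np

# chararray values have trailing whitespace stripped on indexing and
# comparison, and vectorized string methods are available directly.
a = np.char.array(['hello ', 'world  '])
print(a[0])           # trailing spaces are stripped on indexing
print(a == 'hello')   # comparison also ignores trailing whitespace
print(a.upper())      # vectorized string method
```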
numpy.reference.generated.numpy.core.defchararray.array
numpy.core.defchararray.asarray core.defchararray.asarray(obj, itemsize=None, unicode=None, order=None)[source] Convert the input to a chararray, copying the data only if necessary. Versus a regular NumPy array of type str or unicode, this class adds the following functionality: values automatically have whitespace removed from the end when indexed comparison operators automatically remove whitespace from the end when comparing values vectorized string operations are provided as methods (e.g. str.endswith) and infix operators (e.g. +, *, %) Parameters objarray of str or unicode-like itemsizeint, optional itemsize is the number of characters per scalar in the resulting array. If itemsize is None, and obj is an object array or a Python list, the itemsize will be automatically determined. If itemsize is provided and obj is of type str or unicode, then the obj string will be chunked into itemsize pieces. unicodebool, optional When true, the resulting chararray can contain Unicode characters, when false only 8-bit characters. If unicode is None and obj is one of the following: a chararray, an ndarray of type str or unicode a Python str or unicode object, then the unicode setting of the output array will be automatically determined. order{‘C’, ‘F’}, optional Specify the order of the array. If order is ‘C’ (default), then the array will be in C-contiguous order (last-index varies the fastest). If order is ‘F’, then the returned array will be in Fortran-contiguous order (first-index varies the fastest).
numpy.reference.generated.numpy.core.defchararray.asarray
numpy.core.records.array core.records.array(obj, dtype=None, shape=None, offset=0, strides=None, formats=None, names=None, titles=None, aligned=False, byteorder=None, copy=True)[source] Construct a record array from a wide-variety of objects. A general-purpose record array constructor that dispatches to the appropriate recarray creation function based on the inputs (see Notes). Parameters objany Input object. See Notes for details on how various input types are treated. dtypedata-type, optional Valid dtype for array. shapeint or tuple of ints, optional Shape of each array. offsetint, optional Position in the file or buffer to start reading from. stridestuple of ints, optional Buffer (buf) is interpreted according to these strides (strides define how many bytes each array element, row, column, etc. occupy in memory). formats, names, titles, aligned, byteorder : If dtype is None, these arguments are passed to numpy.format_parser to construct a dtype. See that function for detailed documentation. copybool, optional Whether to copy the input object (True), or to use a reference instead. This option only applies when the input is an ndarray or recarray. Defaults to True. Returns np.recarray Record array created from the specified object. Notes If obj is None, then call the recarray constructor. If obj is a string, then call the fromstring constructor. If obj is a list or a tuple, then if the first object is an ndarray, call fromarrays, otherwise call fromrecords. If obj is a recarray, then make a copy of the data in the recarray (if copy=True) and use the new formats, names, and titles. If obj is a file, then call fromfile. Finally, if obj is an ndarray, then return obj.view(recarray), making a copy of the data if copy=True. 
Examples >>> a = np.array([[1, 2, 3], [4, 5, 6], [7, 8, 9]]) >>> a array([[1, 2, 3], [4, 5, 6], [7, 8, 9]]) >>> np.core.records.array(a) rec.array([[1, 2, 3], [4, 5, 6], [7, 8, 9]], dtype=int32) >>> b = [(1, 1), (2, 4), (3, 9)] >>> c = np.core.records.array(b, formats = ['i2', 'f2'], names = ('x', 'y')) >>> c rec.array([(1, 1.0), (2, 4.0), (3, 9.0)], dtype=[('x', '<i2'), ('y', '<f2')]) >>> c.x rec.array([1, 2, 3], dtype=int16) >>> c.y rec.array([ 1.0, 4.0, 9.0], dtype=float16) >>> r = np.rec.array(['abc','def'], names=['col1','col2']) >>> print(r.col1) abc >>> r.col1 array('abc', dtype='<U3') >>> r.col2 array('def', dtype='<U3')
numpy.reference.generated.numpy.core.records.array
numpy.core.records.fromarrays core.records.fromarrays(arrayList, dtype=None, shape=None, formats=None, names=None, titles=None, aligned=False, byteorder=None)[source] Create a record array from a (flat) list of arrays Parameters arrayListlist or tuple List of array-like objects (such as lists, tuples, and ndarrays). dtypedata-type, optional valid dtype for all arrays shapeint or tuple of ints, optional Shape of the resulting array. If not provided, inferred from arrayList[0]. formats, names, titles, aligned, byteorder : If dtype is None, these arguments are passed to numpy.format_parser to construct a dtype. See that function for detailed documentation. Returns np.recarray Record array consisting of given arrayList columns. Examples >>> x1=np.array([1,2,3,4]) >>> x2=np.array(['a','dd','xyz','12']) >>> x3=np.array([1.1,2,3,4]) >>> r = np.core.records.fromarrays([x1,x2,x3],names='a,b,c') >>> print(r[1]) (2, 'dd', 2.0) # may vary >>> x1[1]=34 >>> r.a array([1, 2, 3, 4]) >>> x1 = np.array([1, 2, 3, 4]) >>> x2 = np.array(['a', 'dd', 'xyz', '12']) >>> x3 = np.array([1.1, 2, 3,4]) >>> r = np.core.records.fromarrays( ... [x1, x2, x3], ... dtype=np.dtype([('a', np.int32), ('b', 'S3'), ('c', np.float32)])) >>> r rec.array([(1, b'a', 1.1), (2, b'dd', 2. ), (3, b'xyz', 3. ), (4, b'12', 4. )], dtype=[('a', '<i4'), ('b', 'S3'), ('c', '<f4')])
numpy.reference.generated.numpy.core.records.fromarrays
numpy.core.records.fromfile core.records.fromfile(fd, dtype=None, shape=None, offset=0, formats=None, names=None, titles=None, aligned=False, byteorder=None)[source] Create an array from binary file data Parameters fdstr or file type If file is a string or a path-like object then that file is opened, else it is assumed to be a file object. The file object must support random access (i.e. it must have tell and seek methods). dtypedata-type, optional valid dtype for all arrays shapeint or tuple of ints, optional shape of each array. offsetint, optional Position in the file to start reading from. formats, names, titles, aligned, byteorder : If dtype is None, these arguments are passed to numpy.format_parser to construct a dtype. See that function for detailed documentation Returns np.recarray record array consisting of data enclosed in file. Examples >>> from tempfile import TemporaryFile >>> a = np.empty(10,dtype='f8,i4,a5') >>> a[5] = (0.5,10,'abcde') >>> >>> fd=TemporaryFile() >>> a = a.newbyteorder('<') >>> a.tofile(fd) >>> >>> _ = fd.seek(0) >>> r=np.core.records.fromfile(fd, formats='f8,i4,a5', shape=10, ... byteorder='<') >>> print(r[5]) (0.5, 10, 'abcde') >>> r.shape (10,)
numpy.reference.generated.numpy.core.records.fromfile
numpy.core.records.fromrecords core.records.fromrecords(recList, dtype=None, shape=None, formats=None, names=None, titles=None, aligned=False, byteorder=None)[source] Create a recarray from a list of records in text form. Parameters recListsequence data in the same field may be heterogeneous - they will be promoted to the highest data type. dtypedata-type, optional valid dtype for all arrays shapeint or tuple of ints, optional shape of each array. formats, names, titles, aligned, byteorder : If dtype is None, these arguments are passed to numpy.format_parser to construct a dtype. See that function for detailed documentation. If both formats and dtype are None, then this will auto-detect formats. Use list of tuples rather than list of lists for faster processing. Returns np.recarray record array consisting of given recList rows. Examples >>> r=np.core.records.fromrecords([(456,'dbe',1.2),(2,'de',1.3)], ... names='col1,col2,col3') >>> print(r[0]) (456, 'dbe', 1.2) >>> r.col1 array([456, 2]) >>> r.col2 array(['dbe', 'de'], dtype='<U3') >>> import pickle >>> pickle.loads(pickle.dumps(r)) rec.array([(456, 'dbe', 1.2), ( 2, 'de', 1.3)], dtype=[('col1', '<i8'), ('col2', '<U3'), ('col3', '<f8')])
numpy.reference.generated.numpy.core.records.fromrecords
numpy.core.records.fromstring core.records.fromstring(datastring, dtype=None, shape=None, offset=0, formats=None, names=None, titles=None, aligned=False, byteorder=None)[source] Create a record array from binary data Note that despite the name of this function it does not accept str instances. Parameters datastringbytes-like Buffer of binary data dtypedata-type, optional Valid dtype for all arrays shapeint or tuple of ints, optional Shape of each array. offsetint, optional Position in the buffer to start reading from. formats, names, titles, aligned, byteorder : If dtype is None, these arguments are passed to numpy.format_parser to construct a dtype. See that function for detailed documentation. Returns np.recarray Record array view into the data in datastring. This will be readonly if datastring is readonly. See also numpy.frombuffer Examples >>> a = b'\x01\x02\x03abc' >>> np.core.records.fromstring(a, dtype='u1,u1,u1,S3') rec.array([(1, 2, 3, b'abc')], dtype=[('f0', 'u1'), ('f1', 'u1'), ('f2', 'u1'), ('f3', 'S3')]) >>> grades_dtype = [('Name', (np.str_, 10)), ('Marks', np.float64), ... ('GradeLevel', np.int32)] >>> grades_array = np.array([('Sam', 33.3, 3), ('Mike', 44.4, 5), ... ('Aadi', 66.6, 6)], dtype=grades_dtype) >>> np.core.records.fromstring(grades_array.tobytes(), dtype=grades_dtype) rec.array([('Sam', 33.3, 3), ('Mike', 44.4, 5), ('Aadi', 66.6, 6)], dtype=[('Name', '<U10'), ('Marks', '<f8'), ('GradeLevel', '<i4')]) >>> s = '\x01\x02\x03abc' >>> np.core.records.fromstring(s, dtype='u1,u1,u1,S3') Traceback (most recent call last): ... TypeError: a bytes-like object is required, not 'str'
numpy.reference.generated.numpy.core.records.fromstring
Discrete Fourier Transform (numpy.fft) The SciPy module scipy.fft is a more comprehensive superset of numpy.fft, which includes only a basic set of routines. Standard FFTs fft(a[, n, axis, norm]) Compute the one-dimensional discrete Fourier Transform. ifft(a[, n, axis, norm]) Compute the one-dimensional inverse discrete Fourier Transform. fft2(a[, s, axes, norm]) Compute the 2-dimensional discrete Fourier Transform. ifft2(a[, s, axes, norm]) Compute the 2-dimensional inverse discrete Fourier Transform. fftn(a[, s, axes, norm]) Compute the N-dimensional discrete Fourier Transform. ifftn(a[, s, axes, norm]) Compute the N-dimensional inverse discrete Fourier Transform. Real FFTs rfft(a[, n, axis, norm]) Compute the one-dimensional discrete Fourier Transform for real input. irfft(a[, n, axis, norm]) Computes the inverse of rfft. rfft2(a[, s, axes, norm]) Compute the 2-dimensional FFT of a real array. irfft2(a[, s, axes, norm]) Computes the inverse of rfft2. rfftn(a[, s, axes, norm]) Compute the N-dimensional discrete Fourier Transform for real input. irfftn(a[, s, axes, norm]) Computes the inverse of rfftn. Hermitian FFTs hfft(a[, n, axis, norm]) Compute the FFT of a signal that has Hermitian symmetry, i.e., a real spectrum. ihfft(a[, n, axis, norm]) Compute the inverse FFT of a signal that has Hermitian symmetry. Helper routines fftfreq(n[, d]) Return the Discrete Fourier Transform sample frequencies. rfftfreq(n[, d]) Return the Discrete Fourier Transform sample frequencies (for usage with rfft, irfft). fftshift(x[, axes]) Shift the zero-frequency component to the center of the spectrum. ifftshift(x[, axes]) The inverse of fftshift. Background information Fourier analysis is fundamentally a method for expressing a function as a sum of periodic components, and for recovering the function from those components. When both the function and its Fourier transform are replaced with discretized counterparts, it is called the discrete Fourier transform (DFT). 
The DFT has become a mainstay of numerical computing in part because of a very fast algorithm for computing it, called the Fast Fourier Transform (FFT), which was known to Gauss (1805) and was brought to light in its current form by Cooley and Tukey [CT]. Press et al. [NR] provide an accessible introduction to Fourier analysis and its applications. Because the discrete Fourier transform separates its input into components that contribute at discrete frequencies, it has a great number of applications in digital signal processing, e.g., for filtering, and in this context the discretized input to the transform is customarily referred to as a signal, which exists in the time domain. The output is called a spectrum or transform and exists in the frequency domain. Implementation details There are many ways to define the DFT, varying in the sign of the exponent, normalization, etc. In this implementation, the DFT is defined as \[A_k = \sum_{m=0}^{n-1} a_m \exp\left\{-2\pi i{mk \over n}\right\} \qquad k = 0,\ldots,n-1.\] The DFT is in general defined for complex inputs and outputs, and a single-frequency component at linear frequency \(f\) is represented by a complex exponential \(a_m = \exp\{2\pi i\,f m\Delta t\}\), where \(\Delta t\) is the sampling interval. The values in the result follow so-called “standard” order: If A = fft(a, n), then A[0] contains the zero-frequency term (the sum of the signal), which is always purely real for real inputs. Then A[1:n/2] contains the positive-frequency terms, and A[n/2+1:] contains the negative-frequency terms, in order of decreasingly negative frequency. For an even number of input points, A[n/2] represents both positive and negative Nyquist frequency, and is also purely real for real input. For an odd number of input points, A[(n-1)/2] contains the largest positive frequency, while A[(n+1)/2] contains the largest negative frequency. 
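The "standard" order described above can be checked directly; a small illustrative snippet, assuming NumPy is installed:

```python
import numpy as np

sig = np.array([1.0, 2.0, 1.0, -1.0, 1.5, 1.0])
A = np.fft.fft(sig)
n = len(sig)

# A[0] is the zero-frequency term: the sum of the signal, purely real
# for real input.
print(A[0])

# For an even number of points, A[n//2] is the shared Nyquist term,
# also purely real for real input.
print(A[n // 2])
```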
The routine np.fft.fftfreq(n) returns an array giving the frequencies of corresponding elements in the output. The routine np.fft.fftshift(A) shifts transforms and their frequencies to put the zero-frequency components in the middle, and np.fft.ifftshift(A) undoes that shift. When the input a is a time-domain signal and A = fft(a), np.abs(A) is its amplitude spectrum and np.abs(A)**2 is its power spectrum. The phase spectrum is obtained by np.angle(A). The inverse DFT is defined as \[a_m = \frac{1}{n}\sum_{k=0}^{n-1}A_k\exp\left\{2\pi i{mk\over n}\right\} \qquad m = 0,\ldots,n-1.\] It differs from the forward transform by the sign of the exponential argument and the default normalization by \(1/n\). Type Promotion numpy.fft promotes float32 and complex64 arrays to float64 and complex128 arrays respectively. For an FFT implementation that does not promote input arrays, see scipy.fftpack. Normalization The argument norm indicates which direction of the pair of direct/inverse transforms is scaled and with what normalization factor. The default normalization ("backward") has the direct (forward) transforms unscaled and the inverse (backward) transforms scaled by \(1/n\). It is possible to obtain unitary transforms by setting the keyword argument norm to "ortho" so that both direct and inverse transforms are scaled by \(1/\sqrt{n}\). Finally, setting the keyword argument norm to "forward" has the direct transforms scaled by \(1/n\) and the inverse transforms unscaled (i.e. exactly opposite to the default "backward"). None is an alias of the default option "backward" for backward compatibility. Real and Hermitian transforms When the input is purely real, its transform is Hermitian, i.e., the component at frequency \(f_k\) is the complex conjugate of the component at frequency \(-f_k\), which means that for real inputs there is no information in the negative frequency components that is not already available from the positive frequency components. 
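The normalization options described above can be verified numerically; a short sketch, assuming NumPy is installed:

```python
import numpy as np

x = np.array([0.0, 1.0, 0.0, -1.0, 0.5, 0.25, -0.5, 2.0])
n = len(x)

# Default "backward": forward transform unscaled, inverse scaled by 1/n,
# so a round trip recovers the input.
assert np.allclose(np.fft.ifft(np.fft.fft(x)), x)

# "ortho": both directions scaled by 1/sqrt(n), so the transform is
# unitary and energy is preserved (Parseval's theorem).
unitary = np.fft.fft(x, norm="ortho")
print(np.sum(np.abs(unitary) ** 2))   # equals np.sum(x**2)
```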
The family of rfft functions is designed to operate on real inputs, and exploits this symmetry by computing only the positive frequency components, up to and including the Nyquist frequency. Thus, n input points produce n/2+1 complex output points. The inverses of this family assume the same symmetry of their input, and for an output of n points use n/2+1 input points. Correspondingly, when the spectrum is purely real, the signal is Hermitian. The hfft family of functions exploits this symmetry by using n/2+1 complex points in the input (time) domain for n real points in the frequency domain. In higher dimensions, FFTs are used, e.g., for image analysis and filtering. The computational efficiency of the FFT means that it can also be a faster way to compute large convolutions, using the property that a convolution in the time domain is equivalent to a point-by-point multiplication in the frequency domain. Higher dimensions In two dimensions, the DFT is defined as \[A_{kl} = \sum_{m=0}^{M-1} \sum_{n=0}^{N-1} a_{mn}\exp\left\{-2\pi i \left({mk\over M}+{nl\over N}\right)\right\} \qquad k = 0, \ldots, M-1;\quad l = 0, \ldots, N-1,\] which extends in the obvious way to higher dimensions, and the inverses in higher dimensions also extend in the same way. References CT Cooley, James W., and John W. Tukey, 1965, “An algorithm for the machine calculation of complex Fourier series,” Math. Comput. 19: 297-301. NR Press, W., Teukolsky, S., Vetterline, W.T., and Flannery, B.P., 2007, Numerical Recipes: The Art of Scientific Computing, ch. 12-13. Cambridge Univ. Press, Cambridge, UK. Examples For examples, see the various functions.
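The n/2+1 output length of the real-input transforms can be demonstrated directly; a short sketch, assuming NumPy is installed:

```python
import numpy as np

real_sig = np.array([0.0, 1.0, 0.0, -1.0, 0.5, 0.25, -0.5, 2.0])
m = len(real_sig)

# Real-input FFT: only the non-negative frequency components are
# returned, n//2 + 1 of them for n input points.
spectrum = np.fft.rfft(real_sig)
print(spectrum.shape)

# The inverse uses those n//2 + 1 points to reconstruct n real points.
back = np.fft.irfft(spectrum, m)
print(np.allclose(back, real_sig))
```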
numpy.reference.routines.fft
numpy.DataSource.abspath method DataSource.abspath(path)[source] Return absolute path of file in the DataSource directory. If path is an URL, then abspath will return either the location the file exists locally or the location it would exist when opened using the open method. Parameters pathstr Can be a local file or a remote URL. Returns outstr Complete path, including the DataSource destination directory. Notes The functionality is based on os.path.abspath.
numpy.reference.generated.numpy.datasource.abspath
numpy.DataSource.exists method DataSource.exists(path)[source] Test if path exists. Test if path exists as (and in this order): a local file. a remote URL that has been downloaded and stored locally in the DataSource directory. a remote URL that has not been downloaded, but is valid and accessible. Parameters pathstr Can be a local file or a remote URL. Returns outbool True if path exists. Notes When path is an URL, exists will return True if it’s either stored locally in the DataSource directory, or is a valid remote URL. DataSource does not discriminate between the two, the file is accessible if it exists in either location.
numpy.reference.generated.numpy.datasource.exists
numpy.DataSource.open method DataSource.open(path, mode='r', encoding=None, newline=None)[source] Open and return file-like object. If path is an URL, it will be downloaded, stored in the DataSource directory and opened from there. Parameters pathstr Local file path or URL to open. mode{‘r’, ‘w’, ‘a’}, optional Mode to open path. Mode ‘r’ for reading, ‘w’ for writing, ‘a’ to append. Available modes depend on the type of object specified by path. Default is ‘r’. encoding{None, str}, optional Open text file with given encoding. The default encoding will be what io.open uses. newline{None, str}, optional Newline to use when reading text file. Returns outfile object File object.
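A small sketch of exists and open on a local file; the temporary directory and file name are made up for the demonstration, and DataSource is imported from numpy.lib.npyio since the top-level np.DataSource alias is absent in recent NumPy releases:

```python
import os
import tempfile

from numpy.lib.npyio import DataSource

# Use a temporary directory as the DataSource destination directory.
destdir = tempfile.mkdtemp()
ds = DataSource(destdir)

# Create a local file so the path actually exists.
fname = os.path.join(destdir, 'example.txt')
with open(fname, 'w') as f:
    f.write('hello\n')

print(ds.exists(fname))    # True: the path exists as a local file
with ds.open(fname) as f:  # mode 'r' is the default
    print(f.read())
```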
numpy.reference.generated.numpy.datasource.open
Importing data with genfromtxt NumPy provides several functions to create arrays from tabular data. We focus here on the genfromtxt function. In a nutshell, genfromtxt runs two main loops. The first loop converts each line of the file into a sequence of strings. The second loop converts each string to the appropriate data type. This mechanism is slower than a single loop, but gives more flexibility. In particular, genfromtxt is able to take missing data into account, when other faster and simpler functions like loadtxt cannot. Note When giving examples, we will use the following conventions: >>> import numpy as np >>> from io import StringIO Defining the input The only mandatory argument of genfromtxt is the source of the data. It can be a string, a list of strings, a generator or an open file-like object with a read method, for example, a file or io.StringIO object. If a single string is provided, it is assumed to be the name of a local or remote file. If a list of strings or a generator returning strings is provided, each string is treated as one line in a file. When the URL of a remote file is passed, the file is automatically downloaded to the current directory and opened. Recognized file types are text files and archives. Currently, the function recognizes gzip and bz2 (bzip2) archives. The type of the archive is determined from the extension of the file: if the filename ends with '.gz', a gzip archive is expected; if it ends with '.bz2', a bzip2 archive is assumed. Splitting the lines into columns The delimiter argument Once the file is defined and open for reading, genfromtxt splits each non-empty line into a sequence of strings. Empty or commented lines are just skipped. The delimiter keyword is used to define how the splitting should take place. Quite often, a single character marks the separation between columns. 
For example, comma-separated files (CSV) use a comma (,) or a semicolon (;) as delimiter: >>> data = u"1, 2, 3\n4, 5, 6" >>> np.genfromtxt(StringIO(data), delimiter=",") array([[ 1., 2., 3.], [ 4., 5., 6.]]) Another common separator is "\t", the tabulation character. However, we are not limited to a single character, any string will do. By default, genfromtxt assumes delimiter=None, meaning that the line is split along white spaces (including tabs) and that consecutive white spaces are considered as a single white space. Alternatively, we may be dealing with a fixed-width file, where columns are defined as a given number of characters. In that case, we need to set delimiter to a single integer (if all the columns have the same size) or to a sequence of integers (if columns can have different sizes): >>> data = u" 1 2 3\n 4 5 67\n890123 4" >>> np.genfromtxt(StringIO(data), delimiter=3) array([[ 1., 2., 3.], [ 4., 5., 67.], [ 890., 123., 4.]]) >>> data = u"123456789\n 4 7 9\n 4567 9" >>> np.genfromtxt(StringIO(data), delimiter=(4, 3, 2)) array([[ 1234., 567., 89.], [ 4., 7., 9.], [ 4., 567., 9.]]) The autostrip argument By default, when a line is decomposed into a series of strings, the individual entries are not stripped of leading nor trailing white spaces. This behavior can be overwritten by setting the optional argument autostrip to a value of True: >>> data = u"1, abc , 2\n 3, xxx, 4" >>> # Without autostrip >>> np.genfromtxt(StringIO(data), delimiter=",", dtype="|U5") array([['1', ' abc ', ' 2'], ['3', ' xxx', ' 4']], dtype='<U5') >>> # With autostrip >>> np.genfromtxt(StringIO(data), delimiter=",", dtype="|U5", autostrip=True) array([['1', 'abc', '2'], ['3', 'xxx', '4']], dtype='<U5') The comments argument The optional argument comments is used to define a character string that marks the beginning of a comment. By default, genfromtxt assumes comments='#'. The comment marker may occur anywhere on the line. 
Any character present after the comment marker(s) is simply ignored: >>> data = u"""# ... # Skip me ! ... # Skip me too ! ... 1, 2 ... 3, 4 ... 5, 6 #This is the third line of the data ... 7, 8 ... # And here comes the last line ... 9, 0 ... """ >>> np.genfromtxt(StringIO(data), comments="#", delimiter=",") array([[1., 2.], [3., 4.], [5., 6.], [7., 8.], [9., 0.]]) New in version 1.7.0: When comments is set to None, no lines are treated as comments. Note There is one notable exception to this behavior: if the optional argument names=True, the first commented line will be examined for names. Skipping lines and choosing columns The skip_header and skip_footer arguments The presence of a header in the file can hinder data processing. In that case, we need to use the skip_header optional argument. The values of this argument must be an integer which corresponds to the number of lines to skip at the beginning of the file, before any other action is performed. Similarly, we can skip the last n lines of the file by using the skip_footer attribute and giving it a value of n: >>> data = u"\n".join(str(i) for i in range(10)) >>> np.genfromtxt(StringIO(data),) array([ 0., 1., 2., 3., 4., 5., 6., 7., 8., 9.]) >>> np.genfromtxt(StringIO(data), ... skip_header=3, skip_footer=5) array([ 3., 4.]) By default, skip_header=0 and skip_footer=0, meaning that no lines are skipped. The usecols argument In some cases, we are not interested in all the columns of the data but only a few of them. We can select which columns to import with the usecols argument. This argument accepts a single integer or a sequence of integers corresponding to the indices of the columns to import. Remember that by convention, the first column has an index of 0. Negative integers behave the same as regular Python negative indexes. 
For example, if we want to import only the first and the last columns, we can use usecols=(0, -1): >>> data = u"1 2 3\n4 5 6" >>> np.genfromtxt(StringIO(data), usecols=(0, -1)) array([[ 1., 3.], [ 4., 6.]]) If the columns have names, we can also select which columns to import by giving their name to the usecols argument, either as a sequence of strings or a comma-separated string: >>> data = u"1 2 3\n4 5 6" >>> np.genfromtxt(StringIO(data), ... names="a, b, c", usecols=("a", "c")) array([(1.0, 3.0), (4.0, 6.0)], dtype=[('a', '<f8'), ('c', '<f8')]) >>> np.genfromtxt(StringIO(data), ... names="a, b, c", usecols=("a, c")) array([(1.0, 3.0), (4.0, 6.0)], dtype=[('a', '<f8'), ('c', '<f8')]) Choosing the data type The main way to control how the sequences of strings we have read from the file are converted to other types is to set the dtype argument. Acceptable values for this argument are: a single type, such as dtype=float. The output will be 2D with the given dtype, unless a name has been associated with each column with the use of the names argument (see below). Note that dtype=float is the default for genfromtxt. a sequence of types, such as dtype=(int, float, float). a comma-separated string, such as dtype="i4,f8,|U3". a dictionary with two keys 'names' and 'formats'. a sequence of tuples (name, type), such as dtype=[('A', int), ('B', float)]. an existing numpy.dtype object. the special value None. In that case, the type of the columns will be determined from the data itself (see below). In all the cases but the first one, the output will be a 1D array with a structured dtype. This dtype has as many fields as items in the sequence. The field names are defined with the names keyword. When dtype=None, the type of each column is determined iteratively from its data. 
We start by checking whether a string can be converted to a boolean (that is, if the string matches true or false in lower cases); then whether it can be converted to an integer, then to a float, then to a complex and eventually to a string. This behavior may be changed by modifying the default mapper of the StringConverter class. The option dtype=None is provided for convenience. However, it is significantly slower than setting the dtype explicitly. Setting the names The names argument A natural approach when dealing with tabular data is to allocate a name to each column. A first possibility is to use an explicit structured dtype, as mentioned previously: >>> data = StringIO("1 2 3\n 4 5 6") >>> np.genfromtxt(data, dtype=[(_, int) for _ in "abc"]) array([(1, 2, 3), (4, 5, 6)], dtype=[('a', '<i8'), ('b', '<i8'), ('c', '<i8')]) Another simpler possibility is to use the names keyword with a sequence of strings or a comma-separated string: >>> data = StringIO("1 2 3\n 4 5 6") >>> np.genfromtxt(data, names="A, B, C") array([(1.0, 2.0, 3.0), (4.0, 5.0, 6.0)], dtype=[('A', '<f8'), ('B', '<f8'), ('C', '<f8')]) In the example above, we used the fact that by default, dtype=float. By giving a sequence of names, we are forcing the output to a structured dtype. We may sometimes need to define the column names from the data itself. In that case, we must use the names keyword with a value of True. The names will then be read from the first line (after the skip_header ones), even if the line is commented out: >>> data = StringIO("So it goes\n#a b c\n1 2 3\n 4 5 6") >>> np.genfromtxt(data, skip_header=1, names=True) array([(1.0, 2.0, 3.0), (4.0, 5.0, 6.0)], dtype=[('a', '<f8'), ('b', '<f8'), ('c', '<f8')]) The default value of names is None. 
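Combining names=True with dtype=None, both described above, gives a fully self-describing import. A minimal sketch (the header and column names are made up; the explicit encoding just ensures the final string guess yields str rather than bytes):

```python
import numpy as np
from io import StringIO

# names=True reads the column names from the first (here commented) line,
# and dtype=None guesses each column's type: bool, then int, then float,
# then string, as described above.
data = u"# id,flag,value\n1,True,2.5\n2,False,3.5"
arr = np.genfromtxt(StringIO(data), delimiter=",", names=True,
                    dtype=None, encoding="utf-8")
print(arr.dtype.names)   # ('id', 'flag', 'value')
print(arr['flag'])       # [ True False]
```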
If we give any other value to the keyword, the new names will overwrite the field names we may have defined with the dtype: >>> data = StringIO("1 2 3\n 4 5 6") >>> ndtype=[('a',int), ('b', float), ('c', int)] >>> names = ["A", "B", "C"] >>> np.genfromtxt(data, names=names, dtype=ndtype) array([(1, 2.0, 3), (4, 5.0, 6)], dtype=[('A', '<i8'), ('B', '<f8'), ('C', '<i8')]) The defaultfmt argument If names=None but a structured dtype is expected, names are defined with the standard NumPy default of "f%i", yielding names like f0, f1 and so forth: >>> data = StringIO("1 2 3\n 4 5 6") >>> np.genfromtxt(data, dtype=(int, float, int)) array([(1, 2.0, 3), (4, 5.0, 6)], dtype=[('f0', '<i8'), ('f1', '<f8'), ('f2', '<i8')]) In the same way, if we don’t give enough names to match the length of the dtype, the missing names will be defined with this default template: >>> data = StringIO("1 2 3\n 4 5 6") >>> np.genfromtxt(data, dtype=(int, float, int), names="a") array([(1, 2.0, 3), (4, 5.0, 6)], dtype=[('a', '<i8'), ('f0', '<f8'), ('f1', '<i8')]) We can overwrite this default with the defaultfmt argument, that takes any format string: >>> data = StringIO("1 2 3\n 4 5 6") >>> np.genfromtxt(data, dtype=(int, float, int), defaultfmt="var_%02i") array([(1, 2.0, 3), (4, 5.0, 6)], dtype=[('var_00', '<i8'), ('var_01', '<f8'), ('var_02', '<i8')]) Note We need to keep in mind that defaultfmt is used only if some names are expected but not defined. Validating names NumPy arrays with a structured dtype can also be viewed as recarray, where a field can be accessed as if it were an attribute. For that reason, we may need to make sure that the field name doesn’t contain any space or invalid character, or that it does not correspond to the name of a standard attribute (like size or shape), which would confuse the interpreter. 
genfromtxt accepts three optional arguments that provide a finer control over the names: deletechars Gives a string combining all the characters that must be deleted from the name. By default, invalid characters are ~!@#$%^&*()-=+~\|]}[{';: /?.>,<. excludelist Gives a list of the names to exclude, such as return, file, print… If one of the input names is part of this list, an underscore character ('_') will be appended to it. case_sensitive Whether the names should be case-sensitive (case_sensitive=True), converted to upper case (case_sensitive=False or case_sensitive='upper') or to lower case (case_sensitive='lower'). Tweaking the conversion The converters argument Usually, defining a dtype is sufficient to define how the sequence of strings must be converted. However, some additional control may sometimes be required. For example, we may want to make sure that a date in a format YYYY/MM/DD is converted to a datetime object, or that a string like xx% is properly converted to a float between 0 and 1. In such cases, we should define conversion functions with the converters argument. The value of this argument is typically a dictionary with column indices or column names as keys and conversion functions as values. These conversion functions can either be actual functions or lambda functions. In any case, they should accept only a string as input and output only a single element of the wanted type. In the following example, the second column is converted from a string representing a percentage to a float between 0 and 1: >>> convertfunc = lambda x: float(x.strip(b"%"))/100. >>> data = u"1, 2.3%, 45.\n6, 78.9%, 0" >>> names = ("i", "p", "n") >>> # General case ..... >>> np.genfromtxt(StringIO(data), delimiter=",", names=names) array([(1., nan, 45.), (6., nan, 0.)], dtype=[('i', '<f8'), ('p', '<f8'), ('n', '<f8')]) We need to keep in mind that by default, dtype=float. A float is therefore expected for the second column. 
However, the strings ' 2.3%' and ' 78.9%' cannot be converted to float and we end up having np.nan instead. Let’s now use a converter: >>> # Converted case ... >>> np.genfromtxt(StringIO(data), delimiter=",", names=names, ... converters={1: convertfunc}) array([(1.0, 0.023, 45.0), (6.0, 0.78900000000000003, 0.0)], dtype=[('i', '<f8'), ('p', '<f8'), ('n', '<f8')]) The same results can be obtained by using the name of the second column ("p") as key instead of its index (1): >>> # Using a name for the converter ... >>> np.genfromtxt(StringIO(data), delimiter=",", names=names, ... converters={"p": convertfunc}) array([(1.0, 0.023, 45.0), (6.0, 0.78900000000000003, 0.0)], dtype=[('i', '<f8'), ('p', '<f8'), ('n', '<f8')]) Converters can also be used to provide a default for missing entries. In the following example, the converter convert transforms a stripped string into the corresponding float or into -999 if the string is empty. We need to explicitly strip the string of white spaces, as this is not done by default: >>> data = u"1, , 3\n 4, 5, 6" >>> convert = lambda x: float(x.strip() or -999) >>> np.genfromtxt(StringIO(data), delimiter=",", ... converters={1: convert}) array([[ 1., -999., 3.], [ 4., 5., 6.]]) Using missing and filling values Some entries may be missing in the dataset we are trying to import. In a previous example, we used a converter to transform an empty string into a float. However, user-defined converters may rapidly become cumbersome to manage. The genfromtxt function provides two other complementary mechanisms: the missing_values argument is used to recognize missing data and a second argument, filling_values, is used to process these missing data. missing_values By default, any empty string is marked as missing. We can also consider more complex strings, such as "N/A" or "???" to represent missing or invalid data. 
The missing_values argument accepts three kinds of values: a string or a comma-separated string This string will be used as the marker for missing data for all the columns a sequence of strings In that case, each item is associated with a column, in order. a dictionary Values of the dictionary are strings or sequence of strings. The corresponding keys can be column indices (integers) or column names (strings). In addition, the special key None can be used to define a default applicable to all columns. filling_values We know how to recognize missing data, but we still need to provide a value for these missing entries. By default, this value is determined from the expected dtype according to this table: Expected type Default bool False int -1 float np.nan complex np.nan+0j string '???' We can get a finer control over the conversion of missing values with the filling_values optional argument. Like missing_values, this argument accepts different kinds of values: a single value This will be the default for all columns a sequence of values Each entry will be the default for the corresponding column a dictionary Each key can be a column index or a column name, and the corresponding value should be a single object. We can use the special key None to define a default for all columns. In the following example, we suppose that the missing values are flagged with "N/A" in the first column and by "???" in the third column. We wish to transform these missing values to 0 if they occur in the first and second column, and to -999 if they occur in the last column: >>> data = u"N/A, 2, 3\n4, ,???" >>> kwargs = dict(delimiter=",", ... dtype=int, ... names="a,b,c", ... missing_values={0:"N/A", 'b':" ", 2:"???"}, ... 
filling_values={0:0, 'b':0, 2:-999}) >>> np.genfromtxt(StringIO(data), **kwargs) array([(0, 2, 3), (4, 0, -999)], dtype=[('a', '<i8'), ('b', '<i8'), ('c', '<i8')]) usemask We may also want to keep track of the occurrence of missing data by constructing a boolean mask, with True entries where data was missing and False otherwise. To do that, we just have to set the optional argument usemask to True (the default is False). The output array will then be a MaskedArray. Shortcut functions In addition to genfromtxt, the numpy.lib.npyio module provides several convenience functions derived from genfromtxt. These functions work the same way as the original, but they have different default values. recfromtxt Returns a standard numpy.recarray (if usemask=False) or a MaskedRecords array (if usemask=True). The default dtype is dtype=None, meaning that the types of each column will be automatically determined. recfromcsv Like recfromtxt, but with a default delimiter=",".
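The usemask behaviour described above can be sketched as follows, using an empty field as the missing entry:

```python
import numpy as np
from io import StringIO

# With usemask=True the result is a numpy.ma.MaskedArray whose mask is True
# exactly where data was missing (here, the empty second field of row one).
data = u"1,,3\n4,5,6"
arr = np.genfromtxt(StringIO(data), delimiter=",", usemask=True)
print(arr.mask)   # True only at the missing entry
```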
numpy.user.basics.io.genfromtxt
numpy.distutils.ccompiler.CCompiler_compile distutils.ccompiler.CCompiler_compile(self, sources, output_dir=None, macros=None, include_dirs=None, debug=0, extra_preargs=None, extra_postargs=None, depends=None)[source] Compile one or more source files. Please refer to the Python distutils API reference for more details. Parameters sourceslist of str A list of filenames output_dirstr, optional Path to the output directory. macroslist of tuples A list of macro definitions. include_dirslist of str, optional The directories to add to the default include file search path for this compilation only. debugbool, optional Whether or not to output debug symbols in or alongside the object file(s). extra_preargs, extra_postargs? Extra pre- and post-arguments. dependslist of str, optional A list of file names that all targets depend on. Returns objectslist of str A list of object file names, one per source file sources. Raises CompileError If compilation fails.
numpy.reference.generated.numpy.distutils.ccompiler.ccompiler_compile
numpy.distutils.ccompiler.CCompiler_customize distutils.ccompiler.CCompiler_customize(self, dist, need_cxx=0)[source] Do any platform-specific customization of a compiler instance. This method calls distutils.sysconfig.customize_compiler for platform-specific customization, as well as optionally remove a flag to suppress spurious warnings in case C++ code is being compiled. Parameters distobject This parameter is not used for anything. need_cxxbool, optional Whether or not C++ has to be compiled. If so (True), the "-Wstrict-prototypes" option is removed to prevent spurious warnings. Default is False. Returns None Notes All the default options used by distutils can be extracted with: from distutils import sysconfig sysconfig.get_config_vars('CC', 'CXX', 'OPT', 'BASECFLAGS', 'CCSHARED', 'LDSHARED', 'SO')
numpy.reference.generated.numpy.distutils.ccompiler.ccompiler_customize
numpy.distutils.ccompiler.CCompiler_customize_cmd distutils.ccompiler.CCompiler_customize_cmd(self, cmd, ignore=())[source] Customize compiler using distutils command. Parameters cmdclass instance An instance inheriting from distutils.cmd.Command. ignoresequence of str, optional List of CCompiler commands (without 'set_') that should not be altered. Strings that are checked for are: ('include_dirs', 'define', 'undef', 'libraries', 'library_dirs', 'rpath', 'link_objects'). Returns None
numpy.reference.generated.numpy.distutils.ccompiler.ccompiler_customize_cmd
numpy.distutils.ccompiler.CCompiler_cxx_compiler distutils.ccompiler.CCompiler_cxx_compiler(self)[source] Return the C++ compiler. Parameters None Returns cxxclass instance The C++ compiler, as a CCompiler instance.
numpy.reference.generated.numpy.distutils.ccompiler.ccompiler_cxx_compiler
numpy.distutils.ccompiler.CCompiler_find_executables distutils.ccompiler.CCompiler_find_executables(self)[source] Does nothing here, but is called by the get_version method and can be overridden by subclasses. In particular it is redefined in the FCompiler class where more documentation can be found.
numpy.reference.generated.numpy.distutils.ccompiler.ccompiler_find_executables
numpy.distutils.ccompiler.CCompiler_get_version distutils.ccompiler.CCompiler_get_version(self, force=False, ok_status=[0])[source] Return compiler version, or None if compiler is not available. Parameters forcebool, optional If True, force a new determination of the version, even if the compiler already has a version attribute. Default is False. ok_statuslist of int, optional The list of status values returned by the version look-up process for which a version string is returned. If the status value is not in ok_status, None is returned. Default is [0]. Returns versionstr or None Version string, in the format of distutils.version.LooseVersion.
numpy.reference.generated.numpy.distutils.ccompiler.ccompiler_get_version
numpy.distutils.ccompiler.CCompiler_object_filenames distutils.ccompiler.CCompiler_object_filenames(self, source_filenames, strip_dir=0, output_dir='')[source] Return the name of the object files for the given source files. Parameters source_filenameslist of str The list of paths to source files. Paths can be either relative or absolute, this is handled transparently. strip_dirbool, optional Whether to strip the directory from the returned paths. If True, the file name prepended by output_dir is returned. Default is False. output_dirstr, optional If given, this path is prepended to the returned paths to the object files. Returns obj_nameslist of str The list of paths to the object files corresponding to the source files in source_filenames.
numpy.reference.generated.numpy.distutils.ccompiler.ccompiler_object_filenames
numpy.distutils.ccompiler.CCompiler_show_customization distutils.ccompiler.CCompiler_show_customization(self)[source] Print the compiler customizations to stdout. Parameters None Returns None Notes Printing is only done if the distutils log threshold is < 2.
numpy.reference.generated.numpy.distutils.ccompiler.ccompiler_show_customization
numpy.distutils.ccompiler.CCompiler_spawn distutils.ccompiler.CCompiler_spawn(self, cmd, display=None, env=None)[source] Execute a command in a sub-process. Parameters cmdstr The command to execute. displaystr or sequence of str, optional The text to add to the log file kept by numpy.distutils. If not given, display is equal to cmd. envdict, optional Environment variables for the sub-process. Returns None Raises DistutilsExecError If the command failed, i.e. the exit status was not 0.
numpy.reference.generated.numpy.distutils.ccompiler.ccompiler_spawn
numpy.distutils.ccompiler.gen_lib_options distutils.ccompiler.gen_lib_options(compiler, library_dirs, runtime_library_dirs, libraries)[source]
numpy.reference.generated.numpy.distutils.ccompiler.gen_lib_options
numpy.distutils.ccompiler.new_compiler distutils.ccompiler.new_compiler(plat=None, compiler=None, verbose=None, dry_run=0, force=0)[source]
numpy.reference.generated.numpy.distutils.ccompiler.new_compiler
numpy.distutils.ccompiler.replace_method distutils.ccompiler.replace_method(klass, method_name, func)[source]
numpy.reference.generated.numpy.distutils.ccompiler.replace_method
numpy.distutils.ccompiler.simple_version_match distutils.ccompiler.simple_version_match(pat='[-.\\d]+', ignore='', start='')[source] Simple matching of version numbers, for use in CCompiler and FCompiler. Parameters patstr, optional A regular expression matching version numbers. Default is r'[-.\d]+'. ignorestr, optional A regular expression matching patterns to skip. Default is '', in which case nothing is skipped. startstr, optional A regular expression matching the start of where to start looking for version numbers. Default is '', in which case searching is started at the beginning of the version string given to matcher. Returns matchercallable A function that is appropriate to use as the .version_match attribute of a CCompiler class. matcher takes a single parameter, a version string.
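The matching logic can be illustrated with a minimal re-implementation sketch; this is only an illustration of the behaviour described above, not the actual numpy.distutils source:

```python
import re

def simple_version_match(pat=r'[-.\d]+', ignore='', start=''):
    """Return a matcher(version_string) following the rules described above."""
    def matcher(version_string):
        pos = 0
        # Anchor the search at `start`, if given.
        if start:
            m = re.match(start, version_string)
            if m is None:
                return None
            pos = m.end()
        # Return the first `pat` match that is not excluded by `ignore`.
        while True:
            m = re.search(pat, version_string[pos:])
            if m is None:
                return None
            if ignore and re.match(ignore, m.group(0)):
                pos += m.end()
                continue
            return m.group(0)
    return matcher

match = simple_version_match(start='gcc')
print(match('gcc (GCC) 9.4.0'))       # '9.4.0'
print(match('clang version 13.0.1'))  # None: `start` did not match
```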
numpy.reference.generated.numpy.distutils.ccompiler.simple_version_match
numpy.distutils.ccompiler_opt.CCompilerOpt.cache_flush method distutils.ccompiler_opt.CCompilerOpt.cache_flush()[source] Force update the cache.
numpy.reference.generated.numpy.distutils.ccompiler_opt.ccompileropt.cache_flush
numpy.distutils.ccompiler_opt.CCompilerOpt.cc_normalize_flags method distutils.ccompiler_opt.CCompilerOpt.cc_normalize_flags(flags)[source] Remove the conflicts caused by gathering implied feature flags. Parameters ‘flags’ list, compiler flags flags should be sorted from the lowest to the highest interest. Returns list, filtered of any conflicts. Examples >>> self.cc_normalize_flags(['-march=armv8.2-a+fp16', '-march=armv8.2-a+dotprod']) ['armv8.2-a+fp16+dotprod'] >>> self.cc_normalize_flags( ['-msse', '-msse2', '-msse3', '-mssse3', '-msse4.1', '-msse4.2', '-mavx', '-march=core-avx2'] ) ['-march=core-avx2']
numpy.reference.generated.numpy.distutils.ccompiler_opt.ccompileropt.cc_normalize_flags
numpy.distutils.ccompiler_opt.CCompilerOpt.conf_features attribute distutils.ccompiler_opt.CCompilerOpt.conf_features = {'ASIMD': {'implies': 'NEON_FP16 NEON_VFPV4', 'implies_detect': False, 'interest': 4}, 'ASIMDDP': {'implies': 'ASIMD', 'interest': 6}, 'ASIMDFHM': {'implies': 'ASIMDHP', 'interest': 7}, 'ASIMDHP': {'implies': 'ASIMD', 'interest': 5}, 'AVX': {'headers': 'immintrin.h', 'implies': 'SSE42', 'implies_detect': False, 'interest': 8}, 'AVX2': {'implies': 'F16C', 'interest': 13}, 'AVX512CD': {'implies': 'AVX512F', 'interest': 21}, 'AVX512F': {'extra_checks': 'AVX512F_REDUCE', 'implies': 'FMA3 AVX2', 'implies_detect': False, 'interest': 20}, 'AVX512_CLX': {'detect': 'AVX512_CLX', 'group': 'AVX512VNNI', 'implies': 'AVX512_SKX', 'interest': 43}, 'AVX512_CNL': {'detect': 'AVX512_CNL', 'group': 'AVX512IFMA AVX512VBMI', 'implies': 'AVX512_SKX', 'implies_detect': False, 'interest': 44}, 'AVX512_ICL': {'detect': 'AVX512_ICL', 'group': 'AVX512VBMI2 AVX512BITALG AVX512VPOPCNTDQ', 'implies': 'AVX512_CLX AVX512_CNL', 'implies_detect': False, 'interest': 45}, 'AVX512_KNL': {'detect': 'AVX512_KNL', 'group': 'AVX512ER AVX512PF', 'implies': 'AVX512CD', 'implies_detect': False, 'interest': 40}, 'AVX512_KNM': {'detect': 'AVX512_KNM', 'group': 'AVX5124FMAPS AVX5124VNNIW AVX512VPOPCNTDQ', 'implies': 'AVX512_KNL', 'implies_detect': False, 'interest': 41}, 'AVX512_SKX': {'detect': 'AVX512_SKX', 'extra_checks': 'AVX512BW_MASK AVX512DQ_MASK', 'group': 'AVX512VL AVX512BW AVX512DQ', 'implies': 'AVX512CD', 'implies_detect': False, 'interest': 42}, 'F16C': {'implies': 'AVX', 'interest': 11}, 'FMA3': {'implies': 'F16C', 'interest': 12}, 'FMA4': {'headers': 'x86intrin.h', 'implies': 'AVX', 'interest': 10}, 'NEON': {'headers': 'arm_neon.h', 'interest': 1}, 'NEON_FP16': {'implies': 'NEON', 'interest': 2}, 'NEON_VFPV4': {'implies': 'NEON_FP16', 'interest': 3}, 'POPCNT': {'headers': 'popcntintrin.h', 'implies': 'SSE41', 'interest': 6}, 'SSE': {'headers': 'xmmintrin.h', 'implies': 'SSE2', 
'interest': 1}, 'SSE2': {'headers': 'emmintrin.h', 'implies': 'SSE', 'interest': 2}, 'SSE3': {'headers': 'pmmintrin.h', 'implies': 'SSE2', 'interest': 3}, 'SSE41': {'headers': 'smmintrin.h', 'implies': 'SSSE3', 'interest': 5}, 'SSE42': {'implies': 'POPCNT', 'interest': 7}, 'SSSE3': {'headers': 'tmmintrin.h', 'implies': 'SSE3', 'interest': 4}, 'VSX': {'extra_checks': 'VSX_ASM', 'headers': 'altivec.h', 'interest': 1}, 'VSX2': {'implies': 'VSX', 'implies_detect': False, 'interest': 2}, 'VSX3': {'implies': 'VSX2', 'implies_detect': False, 'interest': 3}, 'XOP': {'headers': 'x86intrin.h', 'implies': 'AVX', 'interest': 9}}
numpy.reference.generated.numpy.distutils.ccompiler_opt.ccompileropt.conf_features
numpy.distutils.ccompiler_opt.CCompilerOpt.conf_features_partial method distutils.ccompiler_opt.CCompilerOpt.conf_features_partial()[source] Return a dictionary of the CPU features supported by the platform, accumulating the remaining undefined options from conf_features. The returned dict follows the same rules and notes as the class attribute conf_features, and it overrides any options that have been set in ‘conf_features’.
numpy.reference.generated.numpy.distutils.ccompiler_opt.ccompileropt.conf_features_partial
numpy.distutils.ccompiler_opt.CCompilerOpt.cpu_baseline_flags method distutils.ccompiler_opt.CCompilerOpt.cpu_baseline_flags()[source] Returns a list of final CPU baseline compiler flags
numpy.reference.generated.numpy.distutils.ccompiler_opt.ccompileropt.cpu_baseline_flags
numpy.distutils.ccompiler_opt.CCompilerOpt.cpu_baseline_names method distutils.ccompiler_opt.CCompilerOpt.cpu_baseline_names()[source] Return a list of final CPU baseline feature names
numpy.reference.generated.numpy.distutils.ccompiler_opt.ccompileropt.cpu_baseline_names
numpy.distutils.ccompiler_opt.CCompilerOpt.cpu_dispatch_names method distutils.ccompiler_opt.CCompilerOpt.cpu_dispatch_names()[source] Return a list of final CPU dispatch feature names
numpy.reference.generated.numpy.distutils.ccompiler_opt.ccompileropt.cpu_dispatch_names
numpy.distutils.ccompiler_opt.CCompilerOpt.dist_compile method distutils.ccompiler_opt.CCompilerOpt.dist_compile(sources, flags, ccompiler=None, **kwargs)[source] Wrap CCompiler.compile()
numpy.reference.generated.numpy.distutils.ccompiler_opt.ccompileropt.dist_compile
numpy.distutils.ccompiler_opt.CCompilerOpt.dist_info method distutils.ccompiler_opt.CCompilerOpt.dist_info()[source] Return a tuple containing info about (platform, compiler, extra_args), required by the abstract class ‘_CCompiler’ for discovering the platform environment. This is also used as a cache factor, in order to detect any changes coming from outside.
numpy.reference.generated.numpy.distutils.ccompiler_opt.ccompileropt.dist_info
numpy.distutils.ccompiler_opt.CCompilerOpt.dist_test method distutils.ccompiler_opt.CCompilerOpt.dist_test(source, flags, macros=[])[source] Return True if ‘CCompiler.compile()’ is able to compile a source file with certain flags.
numpy.reference.generated.numpy.distutils.ccompiler_opt.ccompileropt.dist_test
numpy.distutils.ccompiler_opt.CCompilerOpt.feature_ahead method distutils.ccompiler_opt.CCompilerOpt.feature_ahead(names)[source] Return the list of features in ‘names’ after removing any implied features, keeping the original features. Parameters ‘names’: sequence Sequence of CPU feature names in uppercase. Returns List of CPU features, sorted in the same order as ‘names’. Examples >>> self.feature_ahead(["SSE2", "SSE3", "SSE41"]) ["SSE41"] # assume AVX2 and FMA3 imply each other and AVX2 # is of the highest interest >>> self.feature_ahead(["SSE2", "SSE3", "SSE41", "AVX2", "FMA3"]) ["AVX2"] # assume AVX2 and FMA3 don't imply each other >>> self.feature_ahead(["SSE2", "SSE3", "SSE41", "AVX2", "FMA3"]) ["AVX2", "FMA3"]
numpy.reference.generated.numpy.distutils.ccompiler_opt.ccompileropt.feature_ahead
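The pruning idea behind feature_ahead (drop any feature implied by another feature in the list, preserving input order) can be re-implemented as a small standalone sketch. The implication map below is an assumption for the example and does not model mutual implications (the AVX2/FMA3 collapse-to-highest-interest case), which the real method also handles:

```python
# Assumed implication map: each feature maps to the set of features it
# implies. Illustrative only, not NumPy's real feature data.
IMPLIES = {
    "SSE3": {"SSE2"},
    "SSE41": {"SSE3", "SSE2"},
    "AVX2": {"SSE41", "SSE3", "SSE2"},
    "FMA3": {"SSE41", "SSE3", "SSE2"},
}

def feature_ahead(names):
    """Keep only features not implied by any other feature in `names`."""
    implied = set()
    for name in names:
        implied.update(IMPLIES.get(name, set()))
    # preserve the input order of the surviving features
    return [name for name in names if name not in implied]
```

With this map, `feature_ahead(["SSE2", "SSE3", "SSE41"])` yields `["SSE41"]`, matching the first example above.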
numpy.distutils.ccompiler_opt.CCompilerOpt.feature_c_preprocessor method distutils.ccompiler_opt.CCompilerOpt.feature_c_preprocessor(feature_name, tabs=0)[source] Generate C preprocessor definitions and include headers of a CPU feature. Parameters ‘feature_name’: str CPU feature name in uppercase. ‘tabs’: int If > 0, align the generated strings to the right, depending on the number of tabs. Returns str, generated C preprocessor Examples >>> self.feature_c_preprocessor("SSE3") /** SSE3 **/ #define NPY_HAVE_SSE3 1 #include <pmmintrin.h>
numpy.reference.generated.numpy.distutils.ccompiler_opt.ccompileropt.feature_c_preprocessor
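A minimal sketch of generating the kind of preprocessor block shown in the example above. The feature-to-header mapping and the tab-based indentation are assumptions for illustration; NumPy derives both from its internal feature tables:

```python
# Hypothetical header map for the example; real data lives in NumPy's
# conf_features tables.
HEADERS = {"SSE3": "pmmintrin.h", "SSE41": "smmintrin.h"}

def feature_c_preprocessor(feature_name, tabs=0):
    """Emit a /** NAME **/ banner, a NPY_HAVE_* define, and an #include."""
    indent = "\t" * tabs  # shift the block right by `tabs` tab stops
    lines = [
        f"{indent}/** {feature_name} **/",
        f"{indent}#define NPY_HAVE_{feature_name} 1",
        f"{indent}#include <{HEADERS[feature_name]}>",
    ]
    return "\n".join(lines)
```

For example, `feature_c_preprocessor("SSE3")` reproduces the three-line block from the docstring example.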
numpy.distutils.ccompiler_opt.CCompilerOpt.feature_detect method distutils.ccompiler_opt.CCompilerOpt.feature_detect(names)[source] Return a list of CPU features that are required to be detected, sorted from the lowest to the highest interest.
numpy.reference.generated.numpy.distutils.ccompiler_opt.ccompileropt.feature_detect