Typed Memoryviews

Typed memoryviews allow efficient access to memory buffers, such as those underlying NumPy arrays, without incurring any Python overhead. Memoryviews are similar to the current NumPy array buffer support (np.ndarray[np.float64_t, ndim=2]), but they have more features and cleaner syntax.

Memoryviews are more general than the old NumPy array buffer support, because they can handle a wider variety of sources of array data. For example, they can handle C arrays and the Cython array type (Cython arrays).

A memoryview can be used in any context (function parameters, module-level, cdef class attribute, etc.) and can be obtained from nearly any object that exposes a writable buffer through the PEP 3118 buffer interface.

Quickstart

If you are used to working with NumPy, the following examples should get you started with Cython memoryviews.

    from cython.view cimport array as cvarray
    import numpy as np

    # Memoryview on a NumPy array
    narr = np.arange(27, dtype=np.dtype("i")).reshape((3, 3, 3))
    cdef int [:, :, :] narr_view = narr

    # Memoryview on a C array
    cdef int carr[3][3][3]
    cdef int [:, :, :] carr_view = carr

    # Memoryview on a Cython array
    cyarr = cvarray(shape=(3, 3, 3), itemsize=sizeof(int), format="i")
    cdef int [:, :, :] cyarr_view = cyarr

    # Show the sum of all the arrays before altering it
    print("NumPy sum of the NumPy array before assignments: %s" % narr.sum())

    # We can copy the values from one memoryview into another using a single
    # statement, by either indexing with ... or (NumPy-style) with a colon.
    carr_view[...] = narr_view
    cyarr_view[:] = narr_view
    # NumPy-style syntax for assigning a single value to all elements.
    narr_view[:, :, :] = 3

    # Just to distinguish the arrays
    carr_view[0, 0, 0] = 100
    cyarr_view[0, 0, 0] = 1000

    # Assigning into the memoryview on the NumPy array alters the latter
    print("NumPy sum of NumPy array after assignments: %s" % narr.sum())

    # A function using a memoryview does not usually need the GIL
    cpdef int sum3d(int[:, :, :] arr) nogil:
        cdef size_t i, j, k, I, J, K
        cdef int total = 0
        I = arr.shape[0]
        J = arr.shape[1]
        K = arr.shape[2]
        for i in range(I):
            for j in range(J):
                for k in range(K):
                    total += arr[i, j, k]
        return total

    # A function accepting a memoryview knows how to use a NumPy array,
    # a C array, a Cython array...
    print("Memoryview sum of NumPy array is %s" % sum3d(narr))
    print("Memoryview sum of C array is %s" % sum3d(carr))
    print("Memoryview sum of Cython array is %s" % sum3d(cyarr))
    # ... and of course, a memoryview.
    print("Memoryview sum of C memoryview is %s" % sum3d(carr_view))

This code should give the following output:

    NumPy sum of the NumPy array before assignments: 351
    NumPy sum of NumPy array after assignments: 81
    Memoryview sum of NumPy array is 81
    Memoryview sum of C array is 451
    Memoryview sum of Cython array is 1351
    Memoryview sum of C memoryview is 451

Using memoryviews

Syntax

Memoryviews use Python slicing syntax in a similar way to NumPy.

To create a complete view on a one-dimensional int buffer:

    cdef int[:] view1D = exporting_object

A complete 3D view:

    cdef int[:,:,:] view3D = exporting_object

They also work conveniently as function arguments:

    def process_3d_buffer(int[:,:,:] view not None):
        ...

The not None declaration for the argument automatically rejects None values as input, which would otherwise be allowed. The reason why None is allowed by default is that it is conveniently used for return arguments:

    import numpy as np

    def process_buffer(int[:,:] input_view not None,
                       int[:,:] output_view=None):

        if output_view is None:
            # Creating a default view, e.g.
            output_view = np.empty_like(input_view)

        # process 'input_view' into 'output_view'
        return output_view

Cython will reject incompatible buffers automatically, e.g. passing a three dimensional buffer into a function that requires a two dimensional buffer will raise a ValueError.
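
For example, a small sketch of such a rejection (the function name process_2d_buffer is only for illustration, and the exact error message may vary between Cython versions):

    import numpy as np

    def process_2d_buffer(int[:, :] view not None):
        ...

    arr3d = np.zeros((2, 3, 4), dtype=np.intc)
    process_2d_buffer(arr3d)   # raises ValueError: wrong number of dimensions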

Indexing

In Cython, index access on memory views is automatically translated into memory addresses. The following code requests a two-dimensional memory view of C int typed items and indexes into it:

    cdef int[:,:] buf = exporting_object

    print(buf[1,2])

Negative indices work as well, counting from the end of the respective dimension:

    print(buf[-1,-2])

The following function loops over each dimension of a 2D array and adds 1 to each item:

    import numpy as np

    def add_one(int[:,:] buf):
        for x in range(buf.shape[0]):
            for y in range(buf.shape[1]):
                buf[x, y] += 1

    # exporting_object must be a Python object
    # implementing the buffer interface, e.g. a numpy array.
    exporting_object = np.zeros((10, 20), dtype=np.intc)

    add_one(exporting_object)

Indexing and slicing can be done with or without the GIL. It basically works like NumPy. If indices are specified for every dimension you will get an element of the base type (e.g. int). Otherwise, you will get a new view. An Ellipsis means you get consecutive slices for every unspecified dimension:

    import numpy as np

    exporting_object = np.arange(0, 15 * 10 * 20, dtype=np.intc).reshape((15, 10, 20))

    cdef int[:, :, :] my_view = exporting_object

    # These are all equivalent
    my_view[10]
    my_view[10, :, :]
    my_view[10, ...]

Copying

Memoryviews can be copied in place:

    import numpy as np

    cdef int[:, :, :] to_view, from_view
    to_view = np.empty((20, 15, 30), dtype=np.intc)
    from_view = np.ones((20, 15, 30), dtype=np.intc)

    # copy the elements in from_view to to_view
    to_view[...] = from_view
    # or
    to_view[:] = from_view
    # or
    to_view[:, :, :] = from_view

They can also be copied with the copy() and copy_fortran() methods; see C and Fortran contiguous copies.

Transposing

In most cases (see below), the memoryview can be transposed in the same way that NumPy slices can be transposed:

    import numpy as np

    array = np.arange(20, dtype=np.intc).reshape((2, 10))

    cdef int[:, ::1] c_contig = array
    cdef int[::1, :] f_contig = c_contig.T

This gives a new, transposed, view on the data.

Transposing requires that all dimensions of the memoryview have a direct access memory layout (i.e., there are no indirections through pointers). See Specifying more general memory layouts for details.

Newaxis

As with NumPy, new axes can be introduced by indexing an array with None:

    import numpy as np

    cdef double[:] myslice = np.linspace(0, 10, num=50)

    # 2D array with shape (1, 50)
    myslice[None] # or
    myslice[None, :]

    # 2D array with shape (50, 1)
    myslice[:, None]

    # 3D array with shape (1, 10, 1)
    myslice[None, 10:-20:2, None]

One may mix new axis indexing with all other forms of indexing and slicing, as in the short example below.
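
A minimal sketch of such a mix, reusing the myslice view declared above:

    # Combine a new axis with a step slice: a (1, 25) view of every other element
    cdef double[:, :] expanded = myslice[None, ::2]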

Read-only views

Since Cython 0.28, the memoryview item type can be declared as const to support read-only buffers as input:

    import numpy as np

    cdef const double[:] myslice   # const item type => read-only view

    a = np.linspace(0, 10, num=50)
    a.setflags(write=False)
    myslice = a

Using a non-const memoryview with a binary Python string produces a runtime error. You can solve this issue with a const memoryview:

    cdef bint is_y_in(const unsigned char[:] string_view):
        cdef int i
        for i in range(string_view.shape[0]):
            if string_view[i] == b'y':
                return True
        return False

    print(is_y_in(b'hello world'))   # False
    print(is_y_in(b'hello Cython'))  # True

Note that this does not require the input buffer to be read-only:

    a = np.linspace(0, 10, num=50)
    myslice = a   # read-only view of a writable buffer

Writable buffers are still accepted by const views, but read-only buffers are not accepted for non-const, writable views:

    cdef double[:] myslice   # a normal read/write memory view

    a = np.linspace(0, 10, num=50)
    a.setflags(write=False)
    myslice = a   # ERROR: requesting writable memory view from read-only buffer!

Comparison to the old buffer support

You will probably prefer memoryviews to the older syntax because:

  • The syntax is cleaner
  • Memoryviews do not usually need the GIL (see Memoryviews and the GIL)
  • Memoryviews are considerably faster

For example, this is the old syntax equivalent of the sum3d function above:

    cpdef int old_sum3d(object[int, ndim=3, mode='strided'] arr):
        cdef int I, J, K, total = 0
        I = arr.shape[0]
        J = arr.shape[1]
        K = arr.shape[2]
        for i in range(I):
            for j in range(J):
                for k in range(K):
                    total += arr[i, j, k]
        return total

Note that we can’t use nogil for the buffer version of the function as we could for the memoryview version of sum3d above, because buffer objects are Python objects. However, even if we don’t use nogil with the memoryview, it is significantly faster. This is output from an IPython session after importing both versions:

    In [2]: import numpy as np

    In [3]: arr = np.zeros((40, 40, 40), dtype=int)

    In [4]: timeit -r15 old_sum3d(arr)
    1000 loops, best of 15: 298 us per loop

    In [5]: timeit -r15 sum3d(arr)
    1000 loops, best of 15: 219 us per loop

Python buffer support

Cython memoryviews support nearly all objects exporting the interface of Python new style buffers. This is the buffer interface described in PEP 3118. NumPy arrays support this interface, as do Cython arrays. The “nearly all” is because the Python buffer interface allows the elements in the data array to themselves be pointers; Cython memoryviews do not yet support this.
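
For instance, the built-in bytearray type also exports a writable buffer, so it can back a memoryview directly (a small sketch):

    cdef unsigned char[:] byte_view = bytearray(b"cython")
    byte_view[0] = ord('C')   # writes through to the underlying bytearray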

Memory layout

The buffer interface allows objects to identify the underlying memory in a variety of ways. With the exception of pointers for data elements, Cython memoryviews support all Python new-type buffer layouts. It can be useful to know or specify memory layout if the memory has to be in a particular format for an external routine, or for code optimization.

Background

The concepts are as follows: there is data access and data packing. Data access means either direct (no pointer) or indirect (pointer). Data packing means your data may be contiguous or not contiguous in memory, and may use strides to identify the jumps in memory consecutive indices need to take for each dimension.

NumPy arrays provide a good model of strided direct data access, so we’ll use them for a refresher on the concepts of C and Fortran contiguous arrays, and data strides.

Brief recap on C, Fortran and strided memory layouts

The simplest data layout might be a C contiguous array. This is the default layout in NumPy and Cython arrays. C contiguous means that the array data is continuous in memory (see below) and that neighboring elements in the first dimension of the array are furthest apart in memory, whereas neighboring elements in the last dimension are closest together. For example, in NumPy:

    In [2]: arr = np.array([['0', '1', '2'], ['3', '4', '5']], dtype='S1')

Here, arr[0, 0] and arr[0, 1] are one byte apart in memory, whereas arr[0, 0] and arr[1, 0] are 3 bytes apart. This leads us to the idea of strides. Each axis of the array has a stride length, which is the number of bytes needed to go from one element on this axis to the next element. In the case above, the strides for axes 0 and 1 will obviously be:

    In [3]: arr.strides
    Out[3]: (3, 1)

For a 3D C contiguous array:

    In [5]: c_contig = np.arange(24, dtype=np.int8).reshape((2,3,4))
    In [6]: c_contig.strides
    Out[6]: (12, 4, 1)

A Fortran contiguous array has the opposite memory ordering, with the elements on the first axis closest together in memory:

    In [7]: f_contig = np.array(c_contig, order='F')
    In [8]: np.all(f_contig == c_contig)
    Out[8]: True
    In [9]: f_contig.strides
    Out[9]: (1, 2, 6)

A contiguous array is one for which a single continuous block of memory contains all the data for the elements of the array, and therefore the memory block length is the product of the number of elements in the array and the size of the elements in bytes. In the example above, the memory block is 2 * 3 * 4 * 1 bytes long, where 1 is the length of an int8.

An array can be contiguous without being C or Fortran order:

    In [10]: c_contig.transpose((1, 0, 2)).strides
    Out[10]: (4, 12, 1)

Slicing a NumPy array can easily make it not contiguous:

    In [11]: sliced = c_contig[:,1,:]
    In [12]: sliced.strides
    Out[12]: (12, 1)
    In [13]: sliced.flags
    Out[13]:
      C_CONTIGUOUS : False
      F_CONTIGUOUS : False
      OWNDATA : False
      WRITEABLE : True
      ALIGNED : True
      UPDATEIFCOPY : False

Default behavior for memoryview layouts

As you’ll see in Specifying more general memory layouts, you can specify memory layout for any dimension of a memoryview. For any dimension for which you don’t specify a layout, the data access is assumed to be direct, and the data packing assumed to be strided. For example, that will be the assumption for memoryviews like:

    int [:, :, :] my_memoryview = obj

C and Fortran contiguous memoryviews

You can specify C and Fortran contiguous layouts for the memoryview by using the ::1 step syntax at definition. For example, if you know for sure your memoryview will be on top of a 3D C contiguous layout, you could write:

    cdef int[:, :, ::1] c_contiguous = c_contig

where c_contig could be a C contiguous NumPy array. The ::1 at the 3rd position means that the elements in this 3rd dimension will be one element apart in memory. If you know you will have a 3D Fortran contiguous array:

    cdef int[::1, :, :] f_contiguous = f_contig

If you pass a non-contiguous buffer, for example

    # This array is C contiguous
    c_contig = np.arange(24).reshape((2,3,4))
    cdef int[:, :, ::1] c_contiguous = c_contig

    # But this isn't
    c_contiguous = np.array(c_contig, order='F')

you will get a ValueError at runtime:

    /Users/mb312/dev_trees/minimal-cython/mincy.pyx in init mincy (mincy.c:17267)()
         69
         70 # But this isn't
    ---> 71 c_contiguous = np.array(c_contig, order='F')
         72
         73 # Show the sum of all the arrays before altering it

    /Users/mb312/dev_trees/minimal-cython/stringsource in View.MemoryView.memoryview_cwrapper (mincy.c:9995)()

    /Users/mb312/dev_trees/minimal-cython/stringsource in View.MemoryView.memoryview.__cinit__ (mincy.c:6799)()

    ValueError: ndarray is not C-contiguous

Thus the ::1 in the slice type specification indicates in which dimension the data is contiguous. It can only be used to specify full C or Fortran contiguity.

C and Fortran contiguous copies

Copies can be made C or Fortran contiguous using the .copy() and .copy_fortran() methods:

    # This view is C contiguous
    cdef int[:, :, ::1] c_contiguous = myview.copy()

    # This view is Fortran contiguous
    cdef int[::1, :] f_contiguous_slice = myview.copy_fortran()

Specifying more general memory layouts

Data layout can be specified using the previously seen ::1 slice syntax, or by using any of the constants in cython.view. If no specifier is given in any dimension, then the data access is assumed to be direct, and the data packing assumed to be strided. If you don’t know whether a dimension will be direct or indirect (because you’re getting an object with a buffer interface from some library perhaps), then you can specify the generic flag, in which case it will be determined at runtime.

The flags are as follows:

  • generic - strided and direct or indirect
  • strided - strided and direct (this is the default)
  • indirect - strided and indirect
  • contiguous - contiguous and direct
  • indirect_contiguous - the list of pointers is contiguous

and they can be used like this:

    from cython cimport view

    # direct access in both dimensions, strided in the first dimension, contiguous in the last
    cdef int[:, ::view.contiguous] a

    # contiguous list of pointers to contiguous lists of ints
    cdef int[::view.indirect_contiguous, ::1] b

    # direct or indirect in the first dimension, direct in the second dimension
    # strided in both dimensions
    cdef int[::view.generic, :] c

Only the first, last or the dimension following an indirect dimension may be specified contiguous:

    from cython cimport view

    # VALID
    cdef int[::view.indirect, ::1, :] a
    cdef int[::view.indirect, :, ::1] b
    cdef int[::view.indirect_contiguous, ::1, :] c

    # INVALID
    cdef int[::view.contiguous, ::view.indirect, :] d
    cdef int[::1, ::view.indirect, :] e

The difference between the contiguous flag and the ::1 specifier is that the former specifies contiguity for only one dimension, whereas the latter specifies contiguity for all following (Fortran) or preceding (C) dimensions:

    cdef int[:, ::1] c_contig = ...

    # VALID
    cdef int[:, ::view.contiguous] myslice = c_contig[::2]

    # INVALID
    cdef int[:, ::1] myslice = c_contig[::2]

The former case is valid because the last dimension remains contiguous, but the first dimension does not “follow” the last one anymore (meaning, it was strided already, but it is not C or Fortran contiguous any longer), since it was sliced.

Memoryviews and the GIL

As you will see from the Quickstart section, memoryviews often do not need the GIL:

    cpdef int sum3d(int[:, :, :] arr) nogil:
        ...

In particular, you do not need the GIL for memoryview indexing, slicing or transposing. Memoryviews require the GIL for the copy methods (C and Fortran contiguous copies), or when the dtype is object and an object element is read or written.
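
For instance, a sketch of a function that transposes and slices entirely inside a nogil context (assuming a two-dimensional double buffer):

    cpdef double first_col_sum(double[:, :] arr) nogil:
        # Transposing and slicing are both allowed without the GIL
        cdef double[:] col = arr.T[0, :]
        cdef double total = 0
        cdef Py_ssize_t i
        for i in range(col.shape[0]):
            total += col[i]
        return total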

Memoryview Objects and Cython Arrays

These typed memoryviews can be converted to Python memoryview objects (cython.view.memoryview). These Python objects are indexable, slicable and transposable in the same way that the original memoryviews are. They can also be converted back to Cython-space memoryviews at any time.

They have the following attributes:



  • shape: size in each dimension, as a tuple.

  • strides: stride along each dimension, in bytes.

  • suboffsets

  • ndim: number of dimensions.

  • size: total number of items in the view (product of the shape).

  • itemsize: size, in bytes, of the items in the view.

  • nbytes: equal to size times itemsize.

  • base


And of course the aforementioned T attribute (Transposing). These attributes have the same semantics as in NumPy. For instance, to retrieve the original object:

    import numpy
    cimport numpy as cnp

    cdef cnp.int32_t[:] a = numpy.arange(10, dtype=numpy.int32)
    a = a[::2]

    print(a)
    print(numpy.asarray(a))
    print(a.base)

    # this prints:
    #   <MemoryView of 'ndarray' object>
    #   [0 2 4 6 8]
    #   [0 1 2 3 4 5 6 7 8 9]

Note that this example returns the original object from which the view was obtained, and that the view was resliced in the meantime.

Cython arrays

Whenever a Cython memoryview is copied (using any of the copy or copy_fortran methods), you get a new memoryview slice of a newly created cython.view.array object. This array can also be used manually, and will automatically allocate a block of data. It can later be assigned to a C or Fortran contiguous slice (or a strided slice). It can be used like:

    from cython cimport view

    my_array = view.array(shape=(10, 2), itemsize=sizeof(int), format="i")
    cdef int[:, :] my_slice = my_array

It also takes an optional argument mode (‘c’ or ‘fortran’) and a boolean allocate_buffer, that indicates whether a buffer should be allocated and freed when it goes out of scope:

    cdef view.array my_array = view.array(..., mode="fortran", allocate_buffer=False)
    my_array.data = <char *> my_data_pointer

    # define a function that can deallocate the data (if needed)
    my_array.callback_free_data = free

You can also cast pointers to array, or C arrays to arrays:

    cdef view.array my_array = <int[:10, :2]> my_data_pointer
    cdef view.array my_array = <int[:, :]> my_c_array

Of course, you can also immediately assign a cython.view.array to a typed memoryview slice. A C array may be assigned directly to a memoryview slice:

    cdef int[:, ::1] myslice = my_2d_c_array

The arrays are indexable and slicable from Python space just like memoryview objects, and have the same attributes as memoryview objects.
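
For example (a small sketch; it assumes item access and attributes on the array object behave like they do on a memoryview object, as described above):

    from cython cimport view

    arr = view.array(shape=(4, 2), itemsize=sizeof(int), format="i")
    cdef int[:, :] arr_view = arr
    arr_view[:, :] = 0
    arr_view[3, 1] = 7

    # Usable from Python space like a memoryview object
    print(arr[3, 1])    # 7
    print(arr.shape)    # (4, 2)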

CPython array module

An alternative to cython.view.array is the array module in the Python standard library. In Python 3, the array.array type supports the buffer interface natively, so memoryviews work on top of it without additional setup.

Starting with Cython 0.17, however, it is possible to use these arrays as buffer providers also in Python 2. This is done through explicitly cimporting the cpython.array module as follows:

    cimport cpython.array

    def sum_array(int[:] view):
        """
        >>> from array import array
        >>> sum_array( array('i', [1,2,3]) )
        6
        """
        cdef int i, total = 0
        for i in range(view.shape[0]):
            total += view[i]
        return total

Note that the cimport also enables the old buffer syntax for the array type. Therefore, the following also works:

    from cpython cimport array

    def sum_array(array.array[int] arr):  # using old buffer syntax
        ...

Coercion to NumPy

Memoryview (and array) objects can be coerced to a NumPy ndarray, without having to copy the data. You can e.g. do:

    cimport numpy as np
    import numpy as np

    numpy_array = np.asarray(<np.int32_t[:10, :10]> my_pointer)

Of course, you are not restricted to using NumPy’s types (such as np.int32_t here), you can use any usable type.
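
For example, a plain C double works just as well (a sketch, assuming my_pointer addresses at least 100 doubles):

    numpy_array = np.asarray(<double[:10, :10]> my_pointer)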

None Slices

Although memoryview slices are not objects they can be set to None and they can be checked for being None as well:

    def func(double[:] myarray = None):
        print(myarray is None)

If the function requires real memory views as input, it is therefore best to reject None input straight away in the signature, which is supported in Cython 0.17 and later as follows:

    def func(double[:] myarray not None):
        ...

Unlike object attributes of extension classes, memoryview slices are not initialized to None.

Pass data from a C function via pointer

Since use of pointers in C is ubiquitous, here we give a quick example of how to call C functions whose arguments contain pointers. Let’s suppose you want to manage an array (allocate and deallocate) with NumPy (it can also be Python arrays, or anything that supports the buffer interface), but you want to perform computation on this array with an external C function implemented in C_func_file.c:

    #include "C_func_file.h"

    void multiply_by_10_in_C(double arr[], unsigned int n)
    {
        unsigned int i;
        for (i = 0; i < n; i++) {
            arr[i] *= 10;
        }
    }


This file comes with a header file called C_func_file.h containing:

    #ifndef C_FUNC_FILE_H
    #define C_FUNC_FILE_H

    void multiply_by_10_in_C(double arr[], unsigned int n);

    #endif


where arr points to the array and n is its size.

You can call the function in a Cython file in the following way:

    cdef extern from "C_func_file.c":
        # C is included here so that it doesn't need to be compiled externally
        pass

    cdef extern from "C_func_file.h":
        void multiply_by_10_in_C(double *, unsigned int)

    import numpy as np

    def multiply_by_10(arr):  # 'arr' is a one-dimensional numpy array

        if not arr.flags['C_CONTIGUOUS']:
            arr = np.ascontiguousarray(arr)  # Makes a contiguous copy of the numpy array.

        cdef double[::1] arr_memview = arr

        multiply_by_10_in_C(&arr_memview[0], arr_memview.shape[0])

        return arr


    a = np.ones(5, dtype=np.double)
    print(multiply_by_10(a))

    b = np.ones(10, dtype=np.double)
    b = b[::2]  # b is not contiguous.

    print(multiply_by_10(b))  # but our function still works as expected.


Several things to note:

  • ::1 requests a C contiguous view, and fails if the buffer is not C contiguous. See C and Fortran contiguous memoryviews.
  • &arr_memview[0] can be understood as ‘the address of the first element of the memoryview’. For contiguous arrays, this is equivalent to the start address of the flat memory buffer.
  • arr_memview.shape[0] could have been replaced by arr_memview.size, arr.shape[0] or arr.size. But arr_memview.shape[0] is more efficient because it doesn’t require any Python interaction.
  • multiply_by_10 will perform computation in-place if the array passed is contiguous, and will return a new numpy array if arr is not contiguous.
  • If you are using Python arrays instead of numpy arrays, you don’t need to check if the data is stored contiguously as this is always the case. See Working with Python arrays.

This way, you can call the C function similar to a normal Python function, and leave all the memory management and cleanup to NumPy arrays and Python’s object handling. For the details of how to compile and call functions in C files, see Using C libraries.