repo (stringclasses, 856 values) | pull_number (int64, 3-127k) | instance_id (stringlengths, 12-58) | issue_numbers (sequencelengths, 1-5) | base_commit (stringlengths, 40) | patch (stringlengths, 67-1.54M) | test_patch (stringlengths, 0-107M) | problem_statement (stringlengths, 3-307k) | hints_text (stringlengths, 0-908k) | created_at (timestamp[s])
---|---|---|---|---|---|---|---|---|---|
pyodide/pyodide | 127 | pyodide__pyodide-127 | [
"126"
] | 56ef3a09605a0aff92d3d1923ec1197173fbc3cb | diff --git a/tools/buildpkg.py b/tools/buildpkg.py
--- a/tools/buildpkg.py
+++ b/tools/buildpkg.py
@@ -135,17 +135,17 @@ def compile(path, srcpath, pkg, args):
def package_files(buildpath, srcpath, pkg, args):
- if (buildpath / '.pacakaged').is_file():
+ if (buildpath / '.packaged').is_file():
return
name = pkg['package']['name']
- libdir = get_libdir(srcpath, args)
+ install_prefix = (srcpath / 'install').resolve()
subprocess.run([
'python',
Path(os.environ['EMSCRIPTEN']) / 'tools' / 'file_packager.py',
name + '.data',
'--preload',
- '{}@/lib/python3.6/site-packages'.format(libdir),
+ '{}@/'.format(install_prefix),
'--js-output={}'.format(name + '.js'),
'--export-name=pyodide',
'--exclude', '*.wasm.pre',
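For context, emscripten's `file_packager.py` maps a local path onto the virtual filesystem with the `src@dst` notation, so the change above preloads the whole `install` prefix at the virtual-FS root instead of hard-coding a `site-packages` destination. A minimal sketch of an equivalent standalone invocation (paths are illustrative, not taken from the repo):
```py
import os
import subprocess
from pathlib import Path

# Hypothetical build layout; in buildpkg.py the real path is srcpath / 'install'.
install_prefix = Path('packages/numpy/build/numpy-1.14.1/install').resolve()
subprocess.run([
    'python',
    str(Path(os.environ['EMSCRIPTEN']) / 'tools' / 'file_packager.py'),
    'numpy.data',
    '--preload', '{}@/'.format(install_prefix),  # mount the install tree at the VFS root
    '--js-output=numpy.js',
    '--export-name=pyodide',
], check=True)
```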
| Numpy package doesn't include test files
When examining the virtual filesystem in the built numpy package, the Python test files don't seem to be included (even though they would be in a regular CPython install).
For instance, using the `selenium` fixture from tests,
```py
selenium.load_package('numpy')
selenium.run('from pathlib import Path')
selenium.run('import os')
selenium.run('import numpy as np')
selenium.run('base_dir = Path(np.__file__).parent')
selenium.run('print(base_dir)')
selenium.run('print(list(sorted(os.listdir(base_dir))))')
```
produces,
```py
/lib/python3.6/site-packages/numpy
['__config__.py', '__init__.py', '_distributor_init.py', '_globals.py', '_import_tools.py', 'add_newdocs.py', 'compat', 'conftest.py', 'core', 'ctypeslib.py', 'distutils', 'doc', 'dual.py', 'f2py', 'fft', 'lib', 'linalg', 'ma', 'matlib.py', 'matrixlib', 'polynomial', 'random', 'setup.py', 'testing', 'version.py']
```
i.e. the `tests` folder is not included, even if it is included in `.zip`,
```sh
$ unzip numpy-1.14.1.zip
$ ls numpy-1.14.1/numpy
__init__.py _globals.py compat ctypeslib.py dual.py lib matlib.py random tests
_build_utils _import_tools.py conftest.py distutils f2py linalg matrixlib setup.py version.py
_distributor_init.py add_newdocs.py core doc fft ma polynomial testing
```
and in a pip installed numpy from the same `.zip`
```sh
$ ls ~/.miniconda3/envs/test23/lib/python3.6/site-packages/numpy/
LICENSE.txt __pycache__ _import_tools.py conftest.py distutils f2py linalg matrixlib setup.py version.py
__config__.py _distributor_init.py add_newdocs.py core doc fft ma polynomial testing
__init__.py _globals.py compat ctypeslib.py dual.py lib matlib.py random tests
```
This prevents running the corresponding tests with pytest https://github.com/iodide-project/pyodide/pull/95
| I think I see the issue with installed tests. pyodide takes the contents of `build/lib.../` in the build directory as the installed contents, but it looks like data files (as opposed to code files) are not deposited there by distutils. I think this might be numpy-specific, since data files in matplotlib, for example, are definitely there (or it wouldn't work at all). I'm assigning myself to this one because I have some thoughts about things to try. | 2018-08-23T16:05:53 |
|
pyodide/pyodide | 293 | pyodide__pyodide-293 | [
"278"
] | 75ed8c051af175ac356d8bd903546819d66aae43 | diff --git a/tools/file_packager.py b/tools/file_packager.py
--- a/tools/file_packager.py
+++ b/tools/file_packager.py
@@ -1,6 +1,12 @@
# flake8: noqa
-# This is forked from emscripten 1.38.10
+# This is forked from emscripten 1.38.22, with the original copyright notice
+# below.
+
+# Copyright 2012 The Emscripten Authors. All rights reserved.
+# Emscripten is available under two separate licenses, the MIT license and the
+# University of Illinois/NCSA Open Source License. Both these licenses can be
+# found in the LICENSE file.
'''
A tool that generates FS API calls to generate a filesystem, and packages the files
@@ -59,19 +65,19 @@
'''
from __future__ import print_function
-import os, sys, shutil, random, uuid, ctypes
-
-sys.path.insert(
- 1,
- os.path.join(
- os.path.dirname(
- os.path.dirname(
- os.path.abspath(__file__)
- )
- ),
- 'emsdk', 'emsdk', 'emscripten', 'tag-1.38.12'
- )
+import os
+import sys
+import shutil
+import random
+import uuid
+import ctypes
+
+emscripten_dir = os.path.join(
+ os.path.dirname(os.path.dirname(os.path.abspath(__file__))),
+ 'emsdk', 'emsdk', 'emscripten'
)
+tag_dir = sorted(os.listdir(emscripten_dir), key=lambda x: len(x))[0]
+sys.path.insert(1, os.path.join(emscripten_dir, tag_dir))
from tools.toolchain_profiler import ToolchainProfiler
if __name__ == '__main__':
@@ -79,9 +85,8 @@
import posixpath
from tools import shared
-from tools.shared import suffix, unsuffixed
from tools.jsrun import run_js
-from subprocess import Popen, PIPE, STDOUT
+from subprocess import PIPE
import fnmatch
import json
@@ -96,11 +101,13 @@
IMAGE_SUFFIXES = ('.jpg', '.png', '.bmp')
AUDIO_SUFFIXES = ('.ogg', '.wav', '.mp3')
-AUDIO_MIMETYPES = { 'ogg': 'audio/ogg', 'wav': 'audio/wav', 'mp3': 'audio/mpeg' }
+AUDIO_MIMETYPES = {'ogg': 'audio/ogg', 'wav': 'audio/wav', 'mp3': 'audio/mpeg'}
DDS_HEADER_SIZE = 128
-AV_WORKAROUND = 0 # Set to 1 to randomize file order and add some padding, to work around silly av false positives
+# Set to 1 to randomize file order and add some padding,
+# to work around silly av false positives
+AV_WORKAROUND = 0
data_files = []
excluded_patterns = []
@@ -112,16 +119,22 @@
jsoutput = None
from_emcc = False
force = True
-# If set to True, IndexedDB (IDBFS in library_idbfs.js) is used to locally cache VFS XHR so that subsequent
-# page loads can read the data from the offline cache instead.
+# If set to True, IndexedDB (IDBFS in library_idbfs.js) is used to locally
+# cache VFS XHR so that subsequent page loads can read the data from the
+# offline cache instead.
use_preload_cache = False
indexeddb_name = 'EM_PRELOAD_CACHE'
-# If set to True, the blob received from XHR is moved to the Emscripten HEAP, optimizing for mmap() performance.
-# If set to False, the XHR blob is kept intact, and fread()s etc. are performed directly to that data. This optimizes for minimal memory usage and fread() performance.
+# If set to True, the blob received from XHR is moved to the Emscripten HEAP,
+# optimizing for mmap() performance.
+# If set to False, the XHR blob is kept intact, and fread()s etc. are performed
+# directly to that data. This optimizes for minimal memory usage and fread()
+# performance.
no_heap_copy = True
-# If set to True, the package metadata is stored separately from js-output file which makes js-output file immutable to the package content changes.
-# If set to False, the package metadata is stored inside the js-output file which makes js-output file to mutate on each invocation of this packager tool.
-separate_metadata = False
+# If set to True, the package metadata is stored separately from js-output
+# file which makes js-output file immutable to the package content changes.
+# If set to False, the package metadata is stored inside the js-output file
+# which makes js-output file to mutate on each invocation of this packager tool.
+separate_metadata = False
lz4 = False
use_preload_plugins = False
@@ -170,16 +183,22 @@
leading = ''
elif leading == 'preload' or leading == 'embed':
mode = leading
- at_position = arg.replace('@@', '__').find('@') # position of @ if we're doing 'src@dst'. '__' is used to keep the index same with the original if they escaped with '@@'.
- uses_at_notation = (at_position != -1) # '@@' in input string means there is an actual @ character, a single '@' means the 'src@dst' notation.
+ # position of @ if we're doing 'src@dst'. '__' is used to keep the index
+ # same with the original if they escaped with '@@'.
+ at_position = arg.replace('@@', '__').find('@')
+ # '@@' in input string means there is an actual @ character, a single '@'
+ # means the 'src@dst' notation.
+ uses_at_notation = (at_position != -1)
if uses_at_notation:
srcpath = arg[0:at_position].replace('@@', '@') # split around the @
- dstpath = arg[at_position+1:].replace('@@', '@')
+ dstpath = arg[at_position + 1:].replace('@@', '@')
else:
- srcpath = dstpath = arg.replace('@@', '@') # Use source path as destination path.
+ # Use source path as destination path.
+ srcpath = dstpath = arg.replace('@@', '@')
if os.path.isfile(srcpath) or os.path.isdir(srcpath):
- data_files.append({ 'srcpath': srcpath, 'dstpath': dstpath, 'mode': mode, 'explicit_dst_path': uses_at_notation })
+ data_files.append({'srcpath': srcpath, 'dstpath': dstpath, 'mode': mode,
+ 'explicit_dst_path': uses_at_notation})
else:
print('Warning: ' + arg + ' does not exist, ignoring.', file=sys.stderr)
elif leading == 'exclude':
@@ -190,14 +209,19 @@
if (not force) and len(data_files) == 0:
has_preloaded = False
-if not has_preloaded or jsoutput == None:
- assert not separate_metadata, 'cannot separate-metadata without both --preloaded files and a specified --js-output'
+if not has_preloaded or jsoutput is None:
+ assert not separate_metadata, (
+ 'cannot separate-metadata without both --preloaded files '
+ 'and a specified --js-output')
if not from_emcc:
- print('Remember to build the main file with -s FORCE_FILESYSTEM=1 so that it includes support for loading this file package', file=sys.stderr)
+ print('Remember to build the main file with -s FORCE_FILESYSTEM=1 '
+ 'so that it includes support for loading this file package',
+ file=sys.stderr)
ret = ''
-# emcc.py will add this to the output itself, so it is only needed for standalone calls
+# emcc.py will add this to the output itself, so it is only needed for
+# standalone calls
if not from_emcc:
ret = '''
var Module = typeof %(EXPORT_NAME)s !== 'undefined' ? %(EXPORT_NAME)s : {};
@@ -219,22 +243,26 @@
}
'''
-# Win32 code to test whether the given file has the hidden property set.
+
def has_hidden_attribute(filepath):
+ """Win32 code to test whether the given file has the hidden property set."""
+
if sys.platform != 'win32':
return False
try:
- attrs = ctypes.windll.kernel32.GetFileAttributesW(unicode(filepath))
+ attrs = ctypes.windll.kernel32.GetFileAttributesW(
+ u'%s' % filepath)
assert attrs != -1
result = bool(attrs & 2)
- except:
+ except Exception:
result = False
return result
-# The packager should never preload/embed files if the file is hidden (Win32).
-# or it matches any pattern specified in --exclude
+
def should_ignore(fullname):
+ """The packager should never preload/embed files if the file
+ is hidden (Win32) or it matches any pattern specified in --exclude"""
if has_hidden_attribute(fullname):
return True
@@ -243,10 +271,15 @@ def should_ignore(fullname):
return True
return False
-# Expand directories into individual files
+
def add(mode, rootpathsrc, rootpathdst):
- # rootpathsrc: The path name of the root directory on the local FS we are adding to emscripten virtual FS.
- # rootpathdst: The name we want to make the source path available on the emscripten virtual FS.
+ """Expand directories into individual files
+
+ rootpathsrc: The path name of the root directory on the local FS we are
+ adding to emscripten virtual FS.
+ rootpathdst: The name we want to make the source path available on the
+ emscripten virtual FS.
+ """
for dirpath, dirnames, filenames in os.walk(rootpathsrc):
new_dirnames = []
for name in dirnames:
@@ -254,17 +287,23 @@ def add(mode, rootpathsrc, rootpathdst):
if not should_ignore(fullname):
new_dirnames.append(name)
elif DEBUG:
- print('Skipping directory "' + fullname + '" from inclusion in the emscripten virtual file system.', file=sys.stderr)
+ print('Skipping directory "%s" from inclusion in the emscripten '
+ 'virtual file system.' % fullname, file=sys.stderr)
for name in filenames:
fullname = os.path.join(dirpath, name)
if not should_ignore(fullname):
- dstpath = os.path.join(rootpathdst, os.path.relpath(fullname, rootpathsrc)) # Convert source filename relative to root directory of target FS.
- new_data_files.append({ 'srcpath': fullname, 'dstpath': dstpath, 'mode': mode, 'explicit_dst_path': True })
+ # Convert source filename relative to root directory of target FS.
+ dstpath = os.path.join(rootpathdst,
+ os.path.relpath(fullname, rootpathsrc))
+ new_data_files.append({'srcpath': fullname, 'dstpath': dstpath,
+ 'mode': mode, 'explicit_dst_path': True})
elif DEBUG:
- print('Skipping file "' + fullname + '" from inclusion in the emscripten virtual file system.', file=sys.stderr)
+ print('Skipping file "%s" from inclusion in the emscripten '
+ 'virtual file system.' % fullname, file=sys.stderr)
del dirnames[:]
dirnames.extend(new_dirnames)
+
new_data_files = []
for file_ in data_files:
if not should_ignore(file_['srcpath']):
@@ -272,43 +311,67 @@ def add(mode, rootpathsrc, rootpathdst):
add(file_['mode'], file_['srcpath'], file_['dstpath'])
else:
new_data_files.append(file_)
-data_files = [file_ for file_ in new_data_files if not os.path.isdir(file_['srcpath'])]
+data_files = [file_ for file_ in new_data_files
+ if not os.path.isdir(file_['srcpath'])]
if len(data_files) == 0:
print('Nothing to do!', file=sys.stderr)
sys.exit(1)
# Absolutize paths, and check that they make sense
-curr_abspath = os.path.abspath(os.getcwd()) # os.getcwd() always returns the hard path with any symbolic links resolved, even if we cd'd into a symbolic link.
+# os.getcwd() always returns the hard path with any symbolic links resolved,
+# even if we cd'd into a symbolic link.
+curr_abspath = os.path.abspath(os.getcwd())
for file_ in data_files:
if not file_['explicit_dst_path']:
- # This file was not defined with src@dst, so we inferred the destination from the source. In that case,
- # we require that the destination not be under the current location
+ # This file was not defined with src@dst, so we inferred the destination
+ # from the source. In that case, we require that the destination not be
+ # under the current location
path = file_['dstpath']
- abspath = os.path.realpath(os.path.abspath(path)) # Use os.path.realpath to resolve any symbolic links to hard paths, to match the structure in curr_abspath.
- if DEBUG: print(path, abspath, curr_abspath, file=sys.stderr)
+ # Use os.path.realpath to resolve any symbolic links to hard paths,
+ # to match the structure in curr_abspath.
+ abspath = os.path.realpath(os.path.abspath(path))
+ if DEBUG:
+ print(path, abspath, curr_abspath, file=sys.stderr)
if not abspath.startswith(curr_abspath):
- print('Error: Embedding "%s" which is below the current directory "%s". This is invalid since the current directory becomes the root that the generated code will see' % (path, curr_abspath), file=sys.stderr)
+ print('Error: Embedding "%s" which is below the current directory '
+ '"%s". This is invalid since the current directory becomes the '
+ 'root that the generated code will see' % (path, curr_abspath),
+ file=sys.stderr)
sys.exit(1)
- file_['dstpath'] = abspath[len(curr_abspath)+1:]
+ file_['dstpath'] = abspath[len(curr_abspath) + 1:]
if os.path.isabs(path):
- print('Warning: Embedding an absolute file/directory name "' + path + '" to the virtual filesystem. The file will be made available in the relative path "' + file_['dstpath'] + '". You can use the explicit syntax --preload-file srcpath@dstpath to explicitly specify the target location the absolute source path should be directed to.', file=sys.stderr)
+ print('Warning: Embedding an absolute file/directory name "%s" to the '
+ 'virtual filesystem. The file will be made available in the '
+ 'relative path "%s". You can use the explicit syntax '
+ '--preload-file srcpath@dstpath to explicitly specify the target '
+ 'location the absolute source path should be directed to.'
+ % (path, file_['dstpath']), file=sys.stderr)
for file_ in data_files:
- file_['dstpath'] = file_['dstpath'].replace(os.path.sep, '/') # name in the filesystem, native and emulated
- if file_['dstpath'].endswith('/'): # If user has submitted a directory name as the destination but omitted the destination filename, use the filename from source file
+ # name in the filesystem, native and emulated
+ file_['dstpath'] = file_['dstpath'].replace(os.path.sep, '/')
+ # If user has submitted a directory name as the destination but omitted
+ # the destination filename, use the filename from source file
+ if file_['dstpath'].endswith('/'):
file_['dstpath'] = file_['dstpath'] + os.path.basename(file_['srcpath'])
# make destination path always relative to the root
file_['dstpath'] = posixpath.normpath(os.path.join('/', file_['dstpath']))
if DEBUG:
- print('Packaging file "' + file_['srcpath'] + '" to VFS in path "' + file_['dstpath'] + '".', file=sys.stderr)
+ print('Packaging file "%s" to VFS in path "%s".'
+ % (file_['srcpath'], file_['dstpath']), file=sys.stderr)
# Remove duplicates (can occur naively, for example preload dir/, preload dir/subdir/)
seen = {}
+
+
def was_seen(name):
- if seen.get(name): return True
+ if seen.get(name):
+ return True
seen[name] = 1
return False
+
+
data_files = [file_ for file_ in data_files if not was_seen(file_['dstpath'])]
if AV_WORKAROUND:
@@ -329,27 +392,32 @@ def was_seen(name):
if dirname != '':
parts = dirname.split('/')
for i in range(len(parts)):
- partial = '/'.join(parts[:i+1])
+ partial = '/'.join(parts[:i + 1])
if partial not in partial_dirs:
- code += '''Module['FS_createPath']('/%s', '%s', true, true);\n''' % ('/'.join(parts[:i]), parts[i])
+ code += ('''Module['FS_createPath']('/%s', '%s', true, true);\n'''
+ % ('/'.join(parts[:i]), parts[i]))
partial_dirs.append(partial)
if has_preloaded:
- # Bundle all datafiles into one archive. Avoids doing lots of simultaneous XHRs which has overhead.
+ # Bundle all datafiles into one archive. Avoids doing lots of simultaneous
+ # XHRs which has overhead.
data = open(data_target, 'wb')
start = 0
for file_ in data_files:
file_['data_start'] = start
curr = open(file_['srcpath'], 'rb').read()
file_['data_end'] = start + len(curr)
- if AV_WORKAROUND: curr += '\x00'
- #print >> sys.stderr, 'bundling', file_['srcpath'], file_['dstpath'], file_['data_start'], file_['data_end']
+ if AV_WORKAROUND:
+ curr += '\x00'
start += len(curr)
data.write(curr)
data.close()
# TODO: sha256sum on data_target
- if start > 256*1024*1024:
- print('warning: file packager is creating an asset bundle of %d MB. this is very large, and browsers might have trouble loading it. see https://hacks.mozilla.org/2015/02/synchronous-execution-and-filesystem-access-in-emscripten/' % (start/(1024*1024)), file=sys.stderr)
+ if start > 256 * 1024 * 1024:
+ print('warning: file packager is creating an asset bundle of %d MB. '
+ 'this is very large, and browsers might have trouble loading it. '
+ 'see https://hacks.mozilla.org/2015/02/synchronous-execution-and-filesystem-access-in-emscripten/'
+ % (start / (1024 * 1024)), file=sys.stderr)
create_preloaded = '''
Module['FS_createPreloadedFile'](this.name, null, byteArray, true, true, function() {
@@ -367,7 +435,8 @@ def was_seen(name):
Module['removeRunDependency']('fp ' + that.name);
'''
- # Data requests - for getting a block of data out of the big archive - have a similar API to XHRs
+ # Data requests - for getting a block of data out of the big archive - have
+ # a similar API to XHRs
code += '''
function DataRequest(start, end, audio) {
this.start = start;
@@ -414,10 +483,12 @@ def was_seen(name):
chunk_size = 10240
start = 0
while start < len(data):
- parts.append('''fileData%d.push.apply(fileData%d, %s);\n''' % (counter, counter, str(data[start:start+chunk_size])))
+ parts.append('''fileData%d.push.apply(fileData%d, %s);\n'''
+ % (counter, counter, str(data[start:start + chunk_size])))
start += chunk_size
code += ''.join(parts)
- code += '''Module['FS_createDataFile']('%s', '%s', fileData%d, true, true, false);\n''' % (dirname, basename, counter)
+ code += ('''Module['FS_createDataFile']('%s', '%s', fileData%d, true, true, false);\n'''
+ % (dirname, basename, counter))
counter += 1
elif file_['mode'] == 'preload':
# Preload
@@ -439,7 +510,6 @@ def was_seen(name):
use_data = '''
// copy the entire loaded file into a spot in the heap. Files will refer to slices in that. They cannot be freed though
// (we may be allocating before malloc is ready, during startup).
- if (Module['SPLIT_MEMORY']) err('warning: you should run the file packager with --no-heap-copy when SPLIT_MEMORY is used, otherwise copying into the heap may fail due to the splitting');
var ptr = Module['getMemory'](byteArray.length);
Module['HEAPU8'].set(byteArray, ptr);
DataRequest.prototype.byteArray = Module['HEAPU8'].subarray(ptr, ptr+byteArray.length);
@@ -455,13 +525,17 @@ def was_seen(name):
DataRequest.prototype.requests[files[i].filename].onload();
}
'''
- use_data += " Module['removeRunDependency']('datafile_%s');\n" % shared.JS.escape_for_js_string(data_target)
+ use_data += (" Module['removeRunDependency']('datafile_%s');\n"
+ % shared.JS.escape_for_js_string(data_target))
else:
# LZ4FS usage
temp = data_target + '.orig'
shutil.move(data_target, temp)
- meta = run_js(shared.path_from_root('tools', 'lz4-compress.js'), shared.NODE_JS, [shared.path_from_root('src', 'mini-lz4.js'), temp, data_target], stdout=PIPE)
+ meta = run_js(shared.path_from_root('tools', 'lz4-compress.js'),
+ shared.NODE_JS,
+ [shared.path_from_root('src', 'mini-lz4.js'),
+ temp, data_target], stdout=PIPE)
os.unlink(temp)
use_data = '''
var compressedData = %s;
@@ -473,8 +547,7 @@ def was_seen(name):
package_uuid = uuid.uuid4()
package_name = data_target
- statinfo = os.stat(package_name)
- remote_package_size = statinfo.st_size
+ remote_package_size = os.path.getsize(package_name)
remote_package_name = os.path.basename(package_name)
ret += r'''
var PACKAGE_PATH;
@@ -493,7 +566,8 @@ def was_seen(name):
err('warning: you defined Module.locateFilePackage, that has been renamed to Module.locateFile (using your locateFilePackage for now)');
}
var REMOTE_PACKAGE_NAME = Module['locateFile'] ? Module['locateFile'](REMOTE_PACKAGE_BASE, '') : REMOTE_PACKAGE_BASE;
- ''' % (shared.JS.escape_for_js_string(data_target), shared.JS.escape_for_js_string(remote_package_name))
+ ''' % (shared.JS.escape_for_js_string(data_target),
+ shared.JS.escape_for_js_string(remote_package_name))
metadata['remote_package_size'] = remote_package_size
metadata['package_uuid'] = str(package_uuid)
ret += '''
@@ -538,59 +612,122 @@ def was_seen(name):
};
};
+ // This is needed as chromium has a limit on per-entry files in IndexedDB
+ // https://cs.chromium.org/chromium/src/content/renderer/indexed_db/webidbdatabase_impl.cc?type=cs&sq=package:chromium&g=0&l=177
+ // https://cs.chromium.org/chromium/src/out/Debug/gen/third_party/blink/public/mojom/indexeddb/indexeddb.mojom.h?type=cs&sq=package:chromium&g=0&l=60
+ // We set the chunk size to 64MB to stay well-below the limit
+ var CHUNK_SIZE = 64 * 1024 * 1024;
+
+ function cacheRemotePackage(
+ db,
+ packageName,
+ packageData,
+ packageMeta,
+ callback,
+ errback
+ ) {
+ var transactionPackages = db.transaction([PACKAGE_STORE_NAME], IDB_RW);
+ var packages = transactionPackages.objectStore(PACKAGE_STORE_NAME);
+ var chunkSliceStart = 0;
+ var nextChunkSliceStart = 0;
+ var chunkCount = Math.ceil(packageData.byteLength / CHUNK_SIZE);
+ var finishedChunks = 0;
+ for (var chunkId = 0; chunkId < chunkCount; chunkId++) {
+ nextChunkSliceStart += CHUNK_SIZE;
+ var putPackageRequest = packages.put(
+ packageData.slice(chunkSliceStart, nextChunkSliceStart),
+ 'package/' + packageName + '/' + chunkId
+ );
+ chunkSliceStart = nextChunkSliceStart;
+ putPackageRequest.onsuccess = function(event) {
+ finishedChunks++;
+ if (finishedChunks == chunkCount) {
+ var transaction_metadata = db.transaction(
+ [METADATA_STORE_NAME],
+ IDB_RW
+ );
+ var metadata = transaction_metadata.objectStore(METADATA_STORE_NAME);
+ var putMetadataRequest = metadata.put(
+ {
+ uuid: packageMeta.uuid,
+ chunkCount: chunkCount
+ },
+ 'metadata/' + packageName
+ );
+ putMetadataRequest.onsuccess = function(event) {
+ callback(packageData);
+ };
+ putMetadataRequest.onerror = function(error) {
+ errback(error);
+ };
+ }
+ };
+ putPackageRequest.onerror = function(error) {
+ errback(error);
+ };
+ }
+ }
+
/* Check if there's a cached package, and if so whether it's the latest available */
function checkCachedPackage(db, packageName, callback, errback) {
var transaction = db.transaction([METADATA_STORE_NAME], IDB_RO);
var metadata = transaction.objectStore(METADATA_STORE_NAME);
-
- var getRequest = metadata.get("metadata/" + packageName);
+ var getRequest = metadata.get('metadata/' + packageName);
getRequest.onsuccess = function(event) {
var result = event.target.result;
if (!result) {
- return callback(false);
+ return callback(false, null);
} else {
- return callback(PACKAGE_UUID === result.uuid);
+ return callback(PACKAGE_UUID === result.uuid, result);
}
};
getRequest.onerror = function(error) {
errback(error);
};
- };
+ }
- function fetchCachedPackage(db, packageName, callback, errback) {
+ function fetchCachedPackage(db, packageName, metadata, callback, errback) {
var transaction = db.transaction([PACKAGE_STORE_NAME], IDB_RO);
var packages = transaction.objectStore(PACKAGE_STORE_NAME);
- var getRequest = packages.get("package/" + packageName);
- getRequest.onsuccess = function(event) {
- var result = event.target.result;
- callback(result);
- };
- getRequest.onerror = function(error) {
- errback(error);
- };
- };
-
- function cacheRemotePackage(db, packageName, packageData, packageMeta, callback, errback) {
- var transaction_packages = db.transaction([PACKAGE_STORE_NAME], IDB_RW);
- var packages = transaction_packages.objectStore(PACKAGE_STORE_NAME);
-
- var putPackageRequest = packages.put(packageData, "package/" + packageName);
- putPackageRequest.onsuccess = function(event) {
- var transaction_metadata = db.transaction([METADATA_STORE_NAME], IDB_RW);
- var metadata = transaction_metadata.objectStore(METADATA_STORE_NAME);
- var putMetadataRequest = metadata.put(packageMeta, "metadata/" + packageName);
- putMetadataRequest.onsuccess = function(event) {
- callback(packageData);
+ var chunksDone = 0;
+ var totalSize = 0;
+ var chunks = new Array(metadata.chunkCount);
+
+ for (var chunkId = 0; chunkId < metadata.chunkCount; chunkId++) {
+ var getRequest = packages.get('package/' + packageName + '/' + chunkId);
+ getRequest.onsuccess = function(event) {
+ // If there's only 1 chunk, there's nothing to concatenate it with so we can just return it now
+ if (metadata.chunkCount == 1) {
+ callback(event.target.result);
+ } else {
+ chunksDone++;
+ totalSize += event.target.result.byteLength;
+ chunks.push(event.target.result);
+ if (chunksDone == metadata.chunkCount) {
+ if (chunksDone == 1) {
+ callback(event.target.result);
+ } else {
+ var tempTyped = new Uint8Array(totalSize);
+ var byteOffset = 0;
+ for (var chunkId in chunks) {
+ var buffer = chunks[chunkId];
+ tempTyped.set(new Uint8Array(buffer), byteOffset);
+ byteOffset += buffer.byteLength;
+ buffer = undefined;
+ }
+ chunks = undefined;
+ callback(tempTyped.buffer);
+ tempTyped = undefined;
+ }
+ }
+ }
};
- putMetadataRequest.onerror = function(error) {
+ getRequest.onerror = function(error) {
errback(error);
};
- };
- putPackageRequest.onerror = function(error) {
- errback(error);
- };
- };
+ }
+ }
'''
ret += r'''
@@ -657,7 +794,9 @@ def was_seen(name):
%s
};
Module['addRunDependency']('datafile_%s');
- ''' % (use_data, shared.JS.escape_for_js_string(data_target)) # use basename because from the browser's point of view, we need to find the datafile in the same dir as the html file
+ ''' % (use_data, shared.JS.escape_for_js_string(data_target))
+ # use basename because from the browser's point of view,
+ # we need to find the datafile in the same dir as the html file
code += r'''
if (!Module.preloadResults) Module.preloadResults = {};
@@ -674,11 +813,11 @@ def was_seen(name):
openDatabase(
function(db) {
checkCachedPackage(db, PACKAGE_PATH + PACKAGE_NAME,
- function(useCached) {
+ function(useCached, metadata) {
Module.preloadResults[PACKAGE_NAME] = {fromCache: useCached};
if (useCached) {
console.info('loading ' + PACKAGE_NAME + ' from cache');
- fetchCachedPackage(db, PACKAGE_PATH + PACKAGE_NAME, processPackageData, preloadFallback);
+ fetchCachedPackage(db, PACKAGE_PATH + PACKAGE_NAME, metadata, processPackageData, preloadFallback);
} else {
console.info('loading ' + PACKAGE_NAME + ' from remote');
fetchRemotePackage(REMOTE_PACKAGE_NAME, REMOTE_PACKAGE_SIZE,
@@ -699,8 +838,10 @@ def was_seen(name):
if (Module['setStatus']) Module['setStatus']('Downloading...');
'''
else:
- # Not using preload cache, so we might as well start the xhr ASAP, potentially before JS parsing of the main codebase if it's after us.
- # Only tricky bit is the fetch is async, but also when runWithFS is called is async, so we handle both orderings.
+ # Not using preload cache, so we might as well start the xhr ASAP,
+ # potentially before JS parsing of the main codebase if it's after us.
+ # Only tricky bit is the fetch is async, but also when runWithFS is called
+ # is async, so we handle both orderings.
ret += r'''
var fetchedCallback = null;
var fetched = Module['getPreloadedPackage'] ? Module['getPreloadedPackage'](REMOTE_PACKAGE_NAME, REMOTE_PACKAGE_SIZE) : null;
@@ -739,9 +880,8 @@ def was_seen(name):
}
'''
-ret += '''%s
-})();
-''' % ('''
+if separate_metadata:
+ _metadata_template = '''
Module['removeRunDependency']('%(metadata_file)s');
}
@@ -765,16 +905,26 @@ def was_seen(name):
if (!Module['preRun']) Module['preRun'] = [];
Module["preRun"].push(runMetaWithFS);
}
-''' % {'metadata_file': os.path.basename(jsoutput + '.metadata')} if separate_metadata else '''
+''' % {'metadata_file': os.path.basename(jsoutput + '.metadata')}
+
+else:
+ _metadata_template = '''
}
loadPackage(%s);
-''' % json.dumps(metadata))
+''' % json.dumps(metadata)
+
+ret += '''%s
+})();
+''' % _metadata_template
+
if force or len(data_files):
- if jsoutput == None:
+ if jsoutput is None:
print(ret)
else:
- # Overwrite the old jsoutput file (if exists) only when its content differs from the current generated one, otherwise leave the file untouched preserving its old timestamp
+ # Overwrite the old jsoutput file (if exists) only when its content
+ # differs from the current generated one, otherwise leave the file
+ # untouched preserving its old timestamp
if os.path.isfile(jsoutput):
f = open(jsoutput, 'r+')
old = f.read()
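One detail worth noting in the patch above: cached packages are now written to IndexedDB in 64 MB chunks because, per the comments in the diff, Chromium limits the size of a single IndexedDB entry. The chunk count is a plain ceiling division; a small illustration (the package size is made up):
```py
import math

CHUNK_SIZE = 64 * 1024 * 1024     # mirrors the constant added in the JS above
package_size = 150 * 1024 * 1024  # e.g. a hypothetical 150 MB data bundle

chunk_count = math.ceil(package_size / CHUNK_SIZE)
assert chunk_count == 3           # stored under keys 'package/<name>/0' .. '/2'
```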
| Upgrade to emscripten 1.38.22
This will give us the DSO -> DSO linking from @navytux that should help us decrease the Scipy package size.
| 1.38.22 was released yesterday. `tools/file_packager.py` would also need to be synchronized.
Great. Thanks for the update. | 2019-01-10T12:40:37 |
|
pyodide/pyodide | 325 | pyodide__pyodide-325 | [
"316"
] | 63b6ba23e8844bc660e438e5aaf78b9f6c0d2d45 | diff --git a/src/pyodide.py b/src/pyodide.py
--- a/src/pyodide.py
+++ b/src/pyodide.py
@@ -67,4 +67,16 @@ def find_imports(code):
return list(imports)
-__all__ = ['open_url', 'eval_code', 'find_imports']
+def as_nested_list(obj):
+ """
+ Assumes a Javascript object is made of (possibly nested) arrays and
+ converts them to nested Python lists.
+ """
+ try:
+ it = iter(obj)
+ return [as_nested_list(x) for x in it]
+ except TypeError:
+ return obj
+
+
+__all__ = ['open_url', 'eval_code', 'find_imports', 'as_nested_list']
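A sketch of how the new helper is meant to be used, based on the report below (assumes a JavaScript global `A = [[1,2,3],[4,5,6]]`):
```py
import numpy
from js import A
from pyodide import as_nested_list

m = numpy.array(as_nested_list(A))  # converts the JS arrays to nested Python lists first
```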
| ValueError: invalid __array_struct__ when using js arrays of arrays and numpy
When using a matrix (an array of arrays of numbers) in JavaScript and trying to convert it to a numpy array, it fails with the error `ValueError: invalid __array_struct__`.
To reproduce:
JavaScript:
```
window.A = [[1,2,3],[4,5,6]];
```
Python:
```
import numpy
from js import A
m = numpy.array(A)
```
| Interestingly, other libraries besides numpy do seem to be able to work with JavaScript matrices. For example, statistics does work with a JavaScript matrix.
JavaScript:
```
window.A = [[1,2,3],[4,5,6]];
```
Python:
```
import statistics
from js import A
statistics.mean(A[0])
```
After some more digging around in the code, it appears `__array_struct__` is a special `numpy` thing.
Looking in the code for the numpy package at `packages/numpy`, I see that it is always accessed using the function `PyArray_LookupSpecial_OnInstance`. When I look at that function, it has the interesting comment:
```
* Implements incorrect special method lookup rules, that break the python
* convention, and looks on the instance, not the type.
*
* Kept for backwards compatibility. In future, we should deprecate this.
```
This seems like it could be the root of the problem, but it is unclear to me what change should be made. Patching the implementation of `PyArray_LookupSpecial_OnInstance` to be that of `PyArray_LookupSpecial` might make it work for JavaScript objects, but could break it for native python objects. It's very hard to follow how this code works.
Searching the code further reveals that the error must be produced at line 2241 of `packages/numpy/build/numpy-1.15.1/numpy/core/src/multiarray/ctors.c`. This means that the call to `PyArray_LookupSpecial_OnInstance` at line 2201 of `ctors.c` must be returning something which is not `NULL`, but is also not a valid `__array_struct__`. It is still unclear to me how/why this is happening.
I can't reproduce this as of Pyodide 0.8.2 -- it's possible the changes to object coercion helped.
However, even given that, I wouldn't expect Numpy to be able to automatically convert this given how Javascript Arrays and Objects are kinda-sorta the same thing. I think the best we could expect is to require:
```
np.array([list(x) for x in A])
```
(and maybe provide an `aslist` helper function to handle nested lists). | 2019-02-25T13:35:10 |
|
pyodide/pyodide | 326 | pyodide__pyodide-326 | [
"315"
] | 63b6ba23e8844bc660e438e5aaf78b9f6c0d2d45 | diff --git a/benchmark/benchmarks/large_decimal_list.py b/benchmark/benchmarks/large_decimal_list.py
new file mode 100644
--- /dev/null
+++ b/benchmark/benchmarks/large_decimal_list.py
@@ -0,0 +1,6 @@
+# setup: from decimal import Decimal ; A = [Decimal('2.1') for i in range(1000)] ; B = [Decimal('3.2') for i in range(1000)] # noqa
+# run: large_decimal_list(A, B)
+
+
+def large_decimal_list(A, B):
+ return [a * b for a, b in zip(A, B)]
| Decimal is 450x slower than native
The Decimal data type performs 450 times slower in pyodide than the same code run natively.
To test this I replaced the benchmarks with two of my own. The first I called `large_list`, and it multiplies two 1000 item lists of floats. It runs a very reasonable 10x slower than native. The second test is identical to the first, except that instead of floats it uses Decimals. It runs a painful 450x slower than native.
Here is `large_list`:
```
# setup: A = [2.1 for i in range(1000)] ; B = [3.2 for i in range(1000)]
# run: large_list(A, B)
def large_list(A, B):
    return [a*b for a,b in zip(A,B)]
```
Here is `large_decimal_list`:
```
# setup: from decimal import Decimal ; A = [Decimal('2.1') for i in range(1000)] ; B = [Decimal('3.2') for i in range(1000)]
# run: large_decimal_list(A, B)
def large_decimal_list(A, B):
    return [a*b for a,b in zip(A,B)]
```
And here is the resulting `benchmarks.json` file:
```
{
"pystone": {"native": 0.456669, "firefox": 2.843, "chrome": 2.95},
"large_decimal_list": {"native": 0.004931997666264781, "firefox": 2.188, "chrome": 2.229016110777778},
"large_list": {"native": 0.002595901883776403, "firefox": 0.0214444444444444, "chrome": 0.025382222222222122}
}
```
When I profile similar code in the browser, I see that 95% of the total time (27% self time) is spent in one function (__PyEval_EvalFrameDefault), but I don't know how to proceed on debugging this further. I looked through the Decimal code at
pyodide/cpython/build/3.7.0/host/lib/python3.7/_pydecimal.py, and at the PyEval_EvalFrameDefault code at pyodide/cpython/build/3.7.0/host/Python-3.7.0/Python/ceval.c but nothing jumped out at me as a likely culprit.
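A quick way to confirm where the time goes (this matches the diagnosis in the reply below) is to check whether the C accelerator module is importable at all; this is plain CPython behavior, nothing pyodide-specific:
```py
try:
    import _decimal  # the C implementation; `decimal` uses it when present
    print('decimal is backed by the C accelerator')
except ImportError:
    print('decimal falls back to the pure-Python _pydecimal module')
```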
| It looks like the C implementation of decimal isn't being built, so it's falling back to the Python implementation, which is understandably much slower.
The process would basically involve adding the appropriate incantations to `Setup.local`. | 2019-02-25T16:31:00 |
|
pyodide/pyodide | 337 | pyodide__pyodide-337 | [
"334"
] | f5140c8abda08cabd460c62c0a829ace6d37d503 | diff --git a/pyodide_build/buildpkg.py b/pyodide_build/buildpkg.py
--- a/pyodide_build/buildpkg.py
+++ b/pyodide_build/buildpkg.py
@@ -42,10 +42,14 @@ def check_checksum(path, pkg):
def download_and_extract(buildpath, packagedir, pkg, args):
tarballpath = buildpath / Path(pkg['source']['url']).name
if not tarballpath.is_file():
- subprocess.run([
- 'wget', '-q', '-O', str(tarballpath), pkg['source']['url']
- ], check=True)
- check_checksum(tarballpath, pkg)
+ try:
+ subprocess.run([
+ 'wget', '-q', '-O', str(tarballpath), pkg['source']['url']
+ ], check=True)
+ check_checksum(tarballpath, pkg)
+ except Exception:
+ tarballpath.unlink()
+ raise
srcpath = buildpath / packagedir
if not srcpath.is_dir():
shutil.unpack_archive(str(tarballpath), str(buildpath))
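A design note on the fix above: an alternative that avoids cleanup logic altogether is to download to a temporary name and rename only on success, so the final path never holds a partial file. A sketch (not what the patch does):
```py
import subprocess
from pathlib import Path

def download(url: str, dest: Path) -> None:
    tmp = dest.with_name(dest.name + '.part')
    subprocess.run(['wget', '-q', '-O', str(tmp), url], check=True)
    tmp.rename(dest)  # atomic on POSIX: dest only ever exists fully downloaded
```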
| buildpkg doesn't clean up after failed or interrupted downloads

When I run the `make` command in the Docker environment, it shows this bug. I think it's the network that causes this error.
We can use `rm -rf /src/packages/numpy/build` to solve this problem.
| A slow network also causes another problem. When I only delete the file instead of the directory, the shell shows the error below. Maybe this will be helpful to others. After deleting the directory, the problem was fixed:
```
shutil.ReadError: /src/packages/python-dateutil/build/python-dateutil-2.7.2.tar.gz is not a compressed or uncompressed tar file
Makefile:5: recipe for target 'all' failed
make[1]: *** [all] Error 1
make[1]: Leaving directory '/src/packages'
Makefile:225: recipe for target 'build/packages.json' failed
make: *** [build/packages.json] Error 2
root@0c055853209e:/src# rm /src/packages/python-dateutil/build/python-dateutil-2.7.2.tar.gz
root@0c055853209e:/src# make
make -C packages
make[1]: Entering directory '/src/packages'
# Install build dependencies
/src/cpython/build/3.7.0/host/bin/python3 -m pip install Cython Tempita
Requirement already satisfied: Cython in /src/cpython/build/3.7.0/host/lib/python3.7/site-packages (0.29.6)
Requirement already satisfied: Tempita in /src/cpython/build/3.7.0/host/lib/python3.7/site-packages (0.5.2)
You are using pip version 10.0.1, however version 19.0.3 is available.
You should consider upgrading via the 'pip install --upgrade pip' command.
../bin/pyodide buildall . ../build \
  --package_abi=1 --ldflags="-O3 -s "BINARYEN_METHOD='native-wasm'" -Werror -s EMULATED_FUNCTION_POINTERS=1 -s EMULATE_FUNCTION_POINTER_CASTS=1 -s SIDE_MODULE=1 -s WASM=1 -s "BINARYEN_TRAP_MODE='clamp'" --memory-init-file 0 -s LINKABLE=1 -s EXPORT_ALL=1" --host=/src/cpython/build/3.7.0/host --target=/src/cpython/installs/python-3.7.0
BUILDING PACKAGE: python-dateutil
patching file dateutil/tz/tz.py
patching file dateutil/rrule.py
patch unexpectedly ends in middle of line
Hunk #2 succeeded at 107 with fuzz 1.
Download error on https://pypi.python.org/simple/setuptools_scm/: [Errno -3] Temporary failure in name resolution -- Some packages may not be found!
Download error on https://pypi.python.org/simple/setuptools-scm/: [Errno -3] Temporary failure in name resolution -- Some packages may not be found!
Couldn't find index page for 'setuptools_scm' (maybe misspelled?)
No local packages or working download links found for setuptools_scm
Traceback (most recent call last):
  File "setup.py", line 86, in <module>
    "test": Unsupported
  File "/src/cpython/build/3.7.0/host/lib/python3.7/site-packages/setuptools/__init__.py", line 128, in setup
    _install_setup_requires(attrs)
  File "/src/cpython/build/3.7.0/host/lib/python3.7/site-packages/setuptools/__init__.py", line 123, in _install_setup_requires
    dist.fetch_build_eggs(dist.setup_requires)
  File "/src/cpython/build/3.7.0/host/lib/python3.7/site-packages/setuptools/dist.py", line 513, in fetch_build_eggs
    replace_conflicting=True,
  File "/src/cpython/build/3.7.0/host/lib/python3.7/site-packages/pkg_resources/__init__.py", line 774, in resolve
    replace_conflicting=replace_conflicting
  File "/src/cpython/build/3.7.0/host/lib/python3.7/site-packages/pkg_resources/__init__.py", line 1057, in best_match
    return self.obtain(req, installer)
  File "/src/cpython/build/3.7.0/host/lib/python3.7/site-packages/pkg_resources/__init__.py", line 1069, in obtain
    return installer(requirement)
  File "/src/cpython/build/3.7.0/host/lib/python3.7/site-packages/setuptools/dist.py", line 580, in fetch_build_egg
    return cmd.easy_install(req)
  File "/src/cpython/build/3.7.0/host/lib/python3.7/site-packages/setuptools/command/easy_install.py", line 667, in easy_install
    raise DistutilsError(msg)
distutils.errors.DistutilsError: Could not find suitable distribution for Requirement.parse('setuptools_scm')
Traceback (most recent call last):
  File "/usr/local/lib/python3.7/runpy.py", line 193, in _run_module_as_main
    "__main__", mod_spec)
  File "/usr/local/lib/python3.7/runpy.py", line 85, in _run_code
    exec(code, run_globals)
  File "/src/pyodide_build/__main__.py", line 30, in <module>
    main()
  File "/src/pyodide_build/__main__.py", line 24, in main
    args.func(args)
  File "/src/pyodide_build/buildall.py", line 93, in main
    build_packages(packagesdir, outputdir, args)
  File "/src/pyodide_build/buildall.py", line 50, in build_packages
    build_package(pkgname, dependencies, packagesdir, outputdir, args)
  File "/src/pyodide_build/buildall.py", line 21, in build_package
    build_package(req, dependencies, packagesdir, outputdir, args)
  File "/src/pyodide_build/buildall.py", line 21, in build_package
    build_package(req, dependencies, packagesdir, outputdir, args)
  File "/src/pyodide_build/buildall.py", line 24, in build_package
    buildpkg.build_package(packagesdir / pkgname / 'meta.yaml', args)
  File "/src/pyodide_build/buildpkg.py", line 162, in build_package
    compile(path, srcpath, pkg, args)
  File "/src/pyodide_build/buildpkg.py", line 100, in compile
    '--target', args.target], env=env, check=True)
  File "/usr/local/lib/python3.7/subprocess.py", line 468, in run
    output=stdout, stderr=stderr)
subprocess.CalledProcessError: Command '['/src/cpython/build/3.7.0/host/bin/python3', '-m', 'pyodide_build', 'pywasmcross', '--cflags', ' ', '--ldflags', '-O3 -s BINARYEN_METHOD=native-wasm -Werror -s EMULATED_FUNCTION_POINTERS=1 -s EMULATE_FUNCTION_POINTER_CASTS=1 -s SIDE_MODULE=1 -s WASM=1 -s BINARYEN_TRAP_MODE=clamp --memory-init-file 0 -s LINKABLE=1 -s EXPORT_ALL=1 ', '--host', '/src/cpython/build/3.7.0/host', '--target', '/src/cpython/installs/python-3.7.0']' returned non-zero exit status 1.
Makefile:5: recipe for target 'all' failed
make[1]: *** [all] Error 1
make[1]: Leaving directory '/src/packages'
Makefile:225: recipe for target 'build/packages.json' failed
make: *** [build/packages.json] Error 2
```
|
pyodide/pyodide | 404 | pyodide__pyodide-404 | [
"401"
] | 6e561d2b19ccfc7450260182a79399f88eab153d | diff --git a/packages/matplotlib/src/wasm_backend.py b/packages/matplotlib/src/wasm_backend.py
--- a/packages/matplotlib/src/wasm_backend.py
+++ b/packages/matplotlib/src/wasm_backend.py
@@ -103,10 +103,10 @@ def get_dpi_ratio(self, context):
def create_root_element(self):
# Designed to be overridden by subclasses for use in contexts other
# than iodide.
- from js import iodide
- if iodide is not None:
+ try:
+ from js import iodide
return iodide.output.element('div')
- else:
+ except ImportError:
return document.createElement('div')
def show(self):
| diff --git a/test/test_python.py b/test/test_python.py
--- a/test/test_python.py
+++ b/test/test_python.py
@@ -303,11 +303,18 @@ def test_typed_arrays(selenium, wasm_heap, jstype, pytype):
def test_import_js(selenium):
result = selenium.run(
"""
- from js import window
- window.title = 'Foo'
- window.title
+ import js
+ js.window.title = 'Foo'
+ js.window.title
""")
assert result == 'Foo'
+ result = selenium.run(
+ """
+ dir(js)
+ """)
+ assert len(result) > 100
+ assert 'document' in result
+ assert 'window' in result
def test_pyimport_multiple(selenium):
| A more reasonable error when importing js
I realize that only `from js import something` is supported, but a more reasonable error message for `import js` would be very helpful:
```
import js
Error: TypeError: object of type 'NoneType' has no len()
The above exception was the direct cause of the following exception:
Traceback (most recent call last):
File "/lib/python3.7/site-packages/pyodide.py", line 42, in eval_code
exec(compile(mod, '<exec>', mode='exec'), ns, ns)
File "<exec>", line 1, in <module>
SystemError: <JsImport object at 0x81c2e8> returned a result with an error set
```
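For reference, the new test in the patch above exercises the fixed behavior, under which `import js` acts as a namespace over the JavaScript globals; a sketch of a session after the fix:
```py
import js

js.window.title = 'Foo'
print(js.window.title)        # 'Foo'
print('document' in dir(js))  # True
```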
| 2019-04-30T22:16:52 |
|
pyodide/pyodide | 474 | pyodide__pyodide-474 | [
"420"
] | 4c818996618c1106cc8d57971faff5fcb5930ad3 | diff --git a/docs/conf.py b/docs/conf.py
new file mode 100644
--- /dev/null
+++ b/docs/conf.py
@@ -0,0 +1,192 @@
+# -*- coding: utf-8 -*-
+#
+# Configuration file for the Sphinx documentation builder.
+#
+# This file does only contain a selection of the most common options. For a
+# full list see the documentation:
+# http://www.sphinx-doc.org/en/master/config
+
+# -- Path setup --------------------------------------------------------------
+
+# If extensions (or modules to document with autodoc) are in another directory,
+# add these directories to sys.path here. If the directory is relative to the
+# documentation root, use os.path.abspath to make it absolute, like shown here.
+#
+import os
+import sys
+
+sys.path.insert(0, os.path.abspath("."))
+sys.path.insert(0, os.path.abspath(".."))
+
+# -- Project information -----------------------------------------------------
+
+project = "Pyodide"
+copyright = "2019, Mozilla"
+author = "Mozilla"
+
+from src import pyodide
+
+# The full version, including alpha/beta/rc tags.
+release = version = pyodide.__version__
+
+
+# -- General configuration ---------------------------------------------------
+
+# If your documentation needs a minimal Sphinx version, state it here.
+#
+# needs_sphinx = '1.0'
+
+# Add any Sphinx extension module names here, as strings. They can be
+# extensions coming with Sphinx (named 'sphinx.ext.*') or your custom
+# ones.
+extensions = ["sphinx.ext.autodoc", "m2r"]
+
+# Add any paths that contain templates here, relative to this directory.
+templates_path = ["_templates"]
+
+# The suffix(es) of source filenames.
+# You can specify multiple suffix as a list of string:
+#
+# source_suffix = ['.rst', '.md']
+source_suffix = [".rst", ".md"]
+
+rst_prolog = ""
+
+# The master toctree document.
+master_doc = "index"
+
+# The language for content autogenerated by Sphinx. Refer to documentation
+# for a list of supported languages.
+#
+# This is also used if you do content translation via gettext catalogs.
+# Usually you set "language" from the command line for these cases.
+language = None
+
+# List of patterns, relative to source directory, that match files and
+# directories to ignore when looking for source files.
+# This pattern also affects html_static_path and html_extra_path.
+exclude_patterns = ["_build", "Thumbs.db", ".DS_Store", "README.md"]
+
+# The name of the Pygments (syntax highlighting) style to use.
+pygments_style = None
+
+
+# -- Options for HTML output -------------------------------------------------
+
+# The theme to use for HTML and HTML Help pages. See the documentation for
+# a list of builtin themes.
+#
+html_theme = "sphinx_rtd_theme"
+
+# Theme options are theme-specific and customize the look and feel of a theme
+# further. For a list of options available for each theme, see the
+# documentation.
+#
+html_theme_options = {
+ "display_version": True,
+ "prev_next_buttons_location": "bottom",
+ # Toc options
+ "collapse_navigation": True,
+ "sticky_navigation": True,
+ "navigation_depth": 2,
+}
+
+# Add any paths that contain custom static files (such as style sheets) here,
+# relative to this directory. They are copied after the builtin static files,
+# so a file named "default.css" will overwrite the builtin "default.css".
+html_static_path = ["_static"]
+
+# Custom sidebar templates, must be a dictionary that maps document names
+# to template names.
+#
+# The default sidebars (for documents that don't match any pattern) are
+# defined by theme itself. Builtin themes are using these templates by
+# default: ``['localtoc.html', 'relations.html', 'sourcelink.html',
+# 'searchbox.html']``.
+#
+# html_sidebars = {}
+
+
+rst_prolog += """
+.. important::
+ From your browser, you can
+ `try Pyodide in an Iodide notebook <https://alpha.iodide.io/>`_.
+ The `Iodide documentation site <https://iodide-project.github.io/docs/>`_
+ provides additional user and developer documentation.
+"""
+
+# -- Options for HTMLHelp output ---------------------------------------------
+
+# Output file base name for HTML help builder.
+htmlhelp_basename = "Pyodidedoc"
+
+
+# -- Options for LaTeX output ------------------------------------------------
+
+latex_elements = {
+ # The paper size ('letterpaper' or 'a4paper').
+ #
+ # 'papersize': 'letterpaper',
+ # The font size ('10pt', '11pt' or '12pt').
+ #
+ # 'pointsize': '10pt',
+ # Additional stuff for the LaTeX preamble.
+ #
+ # 'preamble': '',
+ # Latex figure (float) alignment
+ #
+ # 'figure_align': 'htbp',
+}
+
+# Grouping the document tree into LaTeX files. List of tuples
+# (source start file, target name, title,
+# author, documentclass [howto, manual, or own class]).
+latex_documents = [
+ (master_doc, "Pyodide.tex", "Pyodide Documentation", "Mozilla?", "manual")
+]
+
+
+# -- Options for manual page output ------------------------------------------
+
+# One entry per manual page. List of tuples
+# (source start file, name, description, authors, manual section).
+man_pages = [(master_doc, "pyodide", "Pyodide Documentation", [author], 1)]
+
+
+# -- Options for Texinfo output ----------------------------------------------
+
+# Grouping the document tree into Texinfo files. List of tuples
+# (source start file, target name, title, author,
+# dir menu entry, description, category)
+texinfo_documents = [
+ (
+ master_doc,
+ "Pyodide",
+ "Pyodide Documentation",
+ author,
+ "Pyodide",
+ "One line description of project.",
+ "Miscellaneous",
+ )
+]
+
+
+# -- Options for Epub output -------------------------------------------------
+
+# Bibliographic Dublin Core info.
+epub_title = project
+
+# The unique identifier of the text. This can be a ISBN number
+# or the project homepage.
+#
+# epub_identifier = ''
+
+# A unique identification for the text.
+#
+# epub_uid = ''
+
+# A list of files that should not be packed into the epub file.
+epub_exclude_files = ["search.html"]
+
+
+# -- Extension configuration -------------------------------------------------
| Add documentation to ReadTheDocs
Hi @mdboom 👋
To increase visibility and discovery of the project, perhaps creating a sphinx site on RTD using the Markdown files in the repo would be helpful.
If you have interest, I could do a quick doc PR this week.
| Just realized there are GitHub docs. Brief docs on RTD would be helpful too.
Thanks, Carol! I definitely agree we need better and more discoverable docs. I'm curious what advantages you see to readthedocs? I'm wary to add more moving parts to the deployment, but maybe there's a reason to...?
@mdboom Sorry for the delay. The Python science community, as you know, will search for docs on RTD. I'm trying to remember back to when I filed this issue but I seem to recall that it took me a while (> 15 minutes) to realize that there were docs. :wink: | 2019-06-19T22:27:08 |
|
pyodide/pyodide | 689 | pyodide__pyodide-689 | [
"688"
] | ee3e5e44c380d93fa508b0f11a43796f2b1d37c2 | diff --git a/pyodide_build/mkpkg.py b/pyodide_build/mkpkg.py
--- a/pyodide_build/mkpkg.py
+++ b/pyodide_build/mkpkg.py
@@ -75,7 +75,7 @@ def make_package(package, version=None):
def make_parser(parser):
parser.description = '''
Make a new pyodide package. Creates a simple template that will work
-for most pure Python packages, but will have to be edited for more wv
+for most pure Python packages, but will have to be edited for more
complex things.'''.strip()
parser.add_argument(
'package', type=str, nargs=1,
| Typo?
Wondering what "wv" means or if this should read "more complex things."
https://github.com/iodide-project/pyodide/blob/163ab43b64180223d010cdcdcdecd17307cc5a45/pyodide_build/mkpkg.py#L77-L79
| Thanks! Yes, I think so. Don't hesitate to open a PR to fix it... | 2020-06-16T21:25:25 |
|
pyodide/pyodide | 717 | pyodide__pyodide-717 | [
"704"
] | 018f4a23c9765fc6b0c0b3439d1bea3393924f2e | diff --git a/pyodide_build/common.py b/pyodide_build/common.py
--- a/pyodide_build/common.py
+++ b/pyodide_build/common.py
@@ -33,7 +33,7 @@ def parse_package(package):
# TODO: Validate against a schema
with open(package) as fd:
- return yaml.load(fd)
+ return yaml.safe_load(fd)
def _parse_package_subset(query: Optional[str]) -> Optional[Set[str]]:
| Calling yaml.load() without Loader=... is deprecated
For each built package there is now the following deprecation warning:
```
pyodide_build/common.py:27: YAMLLoadWarning: calling yaml.load() without Loader=... is deprecated, as the default Loader is unsafe. Please read https://msg.pyyaml.org/load for full details.
return yaml.load(fd)
```
it would be nice to fix this.
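The patch above applies the standard fix; a minimal illustration of the difference (generic PyYAML 5.x usage, independent of pyodide):
```py
import yaml

text = 'package:\n  name: numpy\n'

data = yaml.load(text)       # PyYAML 5.x: warns without an explicit Loader; newer versions require one
data = yaml.safe_load(text)  # restricted to plain tags; safe for untrusted input
assert data['package']['name'] == 'numpy'
```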
| 2020-07-10T19:04:55 |
||
pyodide/pyodide | 809 | pyodide__pyodide-809 | [
"808"
] | f7adad7eb30a54ce4ed82e6e15950931c77dd1cc | diff --git a/tools/file_packager.py b/tools/file_packager.py
old mode 100644
new mode 100755
--- a/tools/file_packager.py
+++ b/tools/file_packager.py
@@ -1,3 +1,5 @@
+#!/usr/bin/env python3
+
# flake8: noqa
# This is forked from emscripten 1.38.22, with the original copyright notice
@@ -72,20 +74,7 @@
import uuid
import ctypes
-emscripten_dir = os.path.join(
- os.path.dirname(os.path.dirname(os.path.abspath(__file__))),
- 'emsdk', 'emsdk', 'fastcomp', 'emscripten'
-)
-tag_dir = sorted(os.listdir(emscripten_dir), key=lambda x: len(x))[0]
-sys.path.insert(1, emscripten_dir)
-
-from tools.toolchain_profiler import ToolchainProfiler
-if __name__ == '__main__':
- ToolchainProfiler.record_process_start()
-
import posixpath
-from tools import shared
-from tools.jsrun import run_js
from subprocess import PIPE
import fnmatch
import json
@@ -251,6 +240,10 @@
}
'''
+def escape_for_js_string(s):
+ s = s.replace('\\', '/').replace("'", "\\'").replace('"', '\\"')
+ return s
+
def has_hidden_attribute(filepath):
"""Win32 code to test whether the given file has the hidden property set."""
@@ -534,9 +527,18 @@ def was_seen(name):
}
'''
use_data += (" Module['removeRunDependency']('datafile_%s');\n"
- % shared.JS.escape_for_js_string(data_target))
+ % escape_for_js_string(data_target))
else:
+ emscripten_dir = os.path.join(
+ os.path.dirname(os.path.dirname(os.path.abspath(__file__))),
+ 'emsdk', 'emsdk', 'fastcomp', 'emscripten'
+ )
+ sys.path.insert(1, emscripten_dir)
+
+ from tools import shared
+ from tools.jsrun import run_js
+
# LZ4FS usage
temp = data_target + '.orig'
shutil.move(data_target, temp)
@@ -551,7 +553,7 @@ def was_seen(name):
assert(typeof Module.LZ4 === 'object', 'LZ4 not present - was your app build with -s LZ4=1 ?');
Module.LZ4.loadPackage({ 'metadata': metadata, 'compressedData': compressedData });
Module['removeRunDependency']('datafile_%s');
- ''' % (meta, shared.JS.escape_for_js_string(data_target))
+ ''' % (meta, escape_for_js_string(data_target))
package_uuid = uuid.uuid4()
package_name = data_target
@@ -574,8 +576,8 @@ def was_seen(name):
err('warning: you defined Module.locateFilePackage, that has been renamed to Module.locateFile (using your locateFilePackage for now)');
}
var REMOTE_PACKAGE_NAME = Module['locateFile'] ? Module['locateFile'](REMOTE_PACKAGE_BASE, '') : REMOTE_PACKAGE_BASE;
- ''' % (shared.JS.escape_for_js_string(data_target),
- shared.JS.escape_for_js_string(remote_package_name))
+ ''' % (escape_for_js_string(data_target),
+ escape_for_js_string(remote_package_name))
metadata['remote_package_size'] = remote_package_size
metadata['package_uuid'] = str(package_uuid)
ret += '''
@@ -802,7 +804,7 @@ def was_seen(name):
%s
};
Module['addRunDependency']('datafile_%s');
- ''' % (use_data, shared.JS.escape_for_js_string(data_target))
+ ''' % (use_data, escape_for_js_string(data_target))
# use basename because from the browser's point of view,
# we need to find the datafile in the same dir as the html file
| Standalone, dependency-free script to create pure python pyodide package
As a downstream user, I have some pure python packages that I want to ship with pyodide, and I would want this process to be as straightforward as possible.
Currently, to create the package suitable for pyodide consumption, I need to have quite a bit of the pyodide source code available. In particular, it requires various emscripten tools to be present (see also https://github.com/iodide-project/pyodide/issues/228#issuecomment-434940415).
In reality, the amount of code needed to produce this python package is fairly minimal --- one concatenates all the files in the python package, and then writes some metadata and glue into the corresponding javascript file. This is especially true if we forgo lz4 compression (experimentally, lz4 results in more bytes transferred across the wire anyway).
I have written a short python script to do exactly that: https://github.com/SpectralSequences/sseq/blob/master/chart/python/packager.py . This is tuned fairly specifically to my package, but it is not difficult to generalize slightly to make it suitable for generic pure python packages. It would be great if a similar script were available from upstream.
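To make the idea concrete, here is a minimal sketch of that approach (hypothetical helper, no lz4; the metadata layout with per-file start/end offsets loosely mirrors what `file_packager.py` emits):
```python
import json
from pathlib import Path

def package(pkg_dir: str, name: str) -> None:
    """Concatenate a pure Python package into <name>.data and emit JS glue."""
    files, blob, offset = [], bytearray(), 0
    for path in sorted(Path(pkg_dir).rglob("*")):
        if not path.is_file():
            continue
        data = path.read_bytes()
        files.append({"filename": f"/{path.relative_to(pkg_dir)}",
                      "start": offset, "end": offset + len(data)})
        blob.extend(data)
        offset += len(data)
    Path(f"{name}.data").write_bytes(blob)
    # the glue file embeds this metadata; the loader fetches <name>.data once
    # and registers each slice with Emscripten's virtual filesystem
    Path(f"{name}.js").write_text(
        "var metadata = " + json.dumps({"files": files}) + ";\n"
        "// ... loader glue that fetches " + name + ".data and mounts the files ...\n"
    )
```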
I think the most reasonable way forward is to tidy up `file_packager.py` to avoid external dependencies, and instruct users on how to use it. The main changes required would be
1. Vendor the `shared.JS.escape_for_js_string` function that is currently imported from emscripten's scripts
2. Load the `tools.jsrun.run_js` function only when lz4 compression is enabled (this is used to run the lz4-compression script, which is written in javascript)
If there is no intention for our `file_packager.py` to track upstream, various config options can be eliminated to simplify maintenance.
I would be happy to implement this.
| 2020-11-29T06:23:34 |
||
pyodide/pyodide | 872 | pyodide__pyodide-872 | [
"867",
"870"
] | c4548db3c012bb28b58edc316991a821d6fc0bec | diff --git a/packages/micropip/micropip/micropip.py b/packages/micropip/micropip/micropip.py
--- a/packages/micropip/micropip/micropip.py
+++ b/packages/micropip/micropip/micropip.py
@@ -198,7 +198,7 @@ def do_resolve(*args):
self.installed_packages[name] = ver
def add_requirement(self, requirement: str, ctx, transaction):
- if requirement.startswith(("http://", "https://")):
+ if requirement.endswith(".whl"):
# custom download location
name, wheel, version = _parse_wheel_url(requirement)
transaction["wheels"].append((name, wheel, version))
@@ -268,23 +268,28 @@ def version_number(release):
def install(requirements: Union[str, List[str]]):
"""Install the given package and all of its dependencies.
- This only works for pure Python wheels or for packages built
- in pyodide. If a package is not found in the pyodide repository
- it will be loaded from PyPi.
+ See :ref:`loading packages <loading_packages>` for more information.
+
+ This only works for packages that are either pure Python or for packages with
+ C extensions that are built in pyodide. If a pure Python package is not found
+ in the pyodide repository it will be loaded from PyPi.
Parameters
----------
requirements
- a requirements or a list of requirements to install.
- Can be composed either of
+ A requirement or list of requirements to install.
+ Each requirement is a string.
+
+ - If the requirement ends in ".whl", the file will be interpreted as a url.
+ The file must be a wheel named in compliance with the
+ [PEP 427 naming convention](https://www.python.org/dev/peps/pep-0427/#file-format)
- - package names, as defined in pyodide repository or on PyPi
- - URLs pointing to pure Python wheels. The file name of such wheels
- end with ``none-any.whl``.
+ - A package name. A package by this name must either be present in the pyodide
+ repository at `languagePluginUrl` or on PyPi.
Returns
-------
- a Promise that resolves when all packages have downloaded and installed.
+ A Promise that resolves when all packages have been downloaded and installed.
"""
def do_install(resolve, reject):
| diff --git a/packages/micropip/test_micropip.py b/packages/micropip/test_micropip.py
--- a/packages/micropip/test_micropip.py
+++ b/packages/micropip/test_micropip.py
@@ -71,6 +71,43 @@ def test_install_custom_url(selenium_standalone, web_server_tst_data):
selenium_standalone.run("import snowballstemmer")
+def test_add_requirement_relative_url():
+ pytest.importorskip("distlib")
+ import micropip
+
+ transaction = {"wheels": []}
+ micropip.PACKAGE_MANAGER.add_requirement(
+ "./snowballstemmer-2.0.0-py2.py3-none-any.whl", {}, transaction
+ )
+ [name, req, version] = transaction["wheels"][0]
+ assert name == "snowballstemmer"
+ assert version == "2.0.0"
+ assert req["filename"] == "snowballstemmer-2.0.0-py2.py3-none-any.whl"
+ assert req["packagetype"] == "bdist_wheel"
+ assert req["python_version"] == "py2.py3"
+ assert req["abi_tag"] == "none"
+ assert req["platform"] == "any"
+ assert req["url"] == "./snowballstemmer-2.0.0-py2.py3-none-any.whl"
+
+
+def test_install_custom_relative_url(selenium_standalone):
+ selenium_standalone.load_package("micropip")
+ selenium_standalone.run("import micropip")
+
+ root = Path(__file__).resolve().parents[2]
+ src = root / "src" / "tests" / "data"
+ target = root / "build" / "test_data"
+ target.symlink_to(src, True)
+ try:
+ url = "./test_data/snowballstemmer-2.0.0-py2.py3-none-any.whl"
+ selenium_standalone.run(f"micropip.install('{url}')")
+ # wait untill micropip is loaded
+ time.sleep(1)
+ selenium_standalone.run("import snowballstemmer")
+ finally:
+ target.unlink()
+
+
def test_last_version_from_pypi():
pytest.importorskip("distlib")
import micropip
| Allow Micropip to download from relative urls
I think to allow relative urls, we'd only need to change the check here:
https://github.com/iodide-project/pyodide/blob/3a06f5dfcb9b536e9ece1f68f6963717acf82486/packages/micropip/micropip/micropip.py#L201
to `if "/" in requirement` or `if requirement.endswith(".whl")`.
Also, the documentation of `micropip.install` is a bit lacking. It'd be good to add an explanation of how `micropip` decides what a url is.
But for instance, it could be helpful to indicate the case where `url` is `"some_package-vers-py3-none-any.whl"`: does this expect a file `some_package-vers-py3-none-any.whl` to be in the current directory?
It would be good to mention that all wheels need to be named according to PEP 427 too.
https://www.python.org/dev/peps/pep-0427/#file-name-convention
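For illustration, a minimal sketch of parsing such a name (ignoring the optional build tag that PEP 427 allows; micropip's actual implementation differs):
```python
def parse_wheel_name(filename: str) -> dict:
    # e.g. "snowballstemmer-2.0.0-py2.py3-none-any.whl"
    #       name           -version-python_tag-abi_tag-platform
    assert filename.endswith(".whl"), "not a wheel"
    name, version, python_tag, abi_tag, platform = filename[: -len(".whl")].split("-")
    return {"name": name, "version": version, "python_version": python_tag,
            "abi_tag": abi_tag, "platform": platform}
```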
Redundancies / differences between pyodide.loadPackage and micropip.install?
It seems to me that `pyodide.loadPackage` and `micropip.install` have significant redundancies in their functionality. Is there any difference in their purpose? Which one is better? Could they be merged? If not, it would be good to add a very explicit explanation of their differences to the docs.
| > change the check here: [...] to `if requirement.endswith(".whl")`
+1 for this. Would you like to make a PR (and improve documentation) ? :)
There are currently two places for micropip documentation,
- the [user manual](https://pyodide.readthedocs.io/en/latest/loading_packages.html#micropip)
- the [docstring](https://pyodide.readthedocs.io/en/latest/micropip-api/micropip.install.html#micropip.install)
and it would be good to cross link them better. With typically a short description in the docstring (but still more extensive than what we have now) linking to the user manual for more details.
> Would you like to make a PR (and improve documentation) ?
Yeah I'll do that. I've been planning to make some PRs to improve the docs but I keep being struck by laziness.
| 2020-12-16T21:18:22 |
pyodide/pyodide | 891 | pyodide__pyodide-891 | [
"888"
] | dfc509f1f102cc1bf560b59527ef9a4805819314 | diff --git a/src/pyodide-py/pyodide/__init__.py b/src/pyodide-py/pyodide/__init__.py
--- a/src/pyodide-py/pyodide/__init__.py
+++ b/src/pyodide-py/pyodide/__init__.py
@@ -1,6 +1,13 @@
-from ._base import open_url, eval_code, find_imports, as_nested_list
+from ._base import open_url, eval_code, find_imports, as_nested_list, JsException
from .console import get_completions
__version__ = "0.15.0"
-__all__ = ["open_url", "eval_code", "find_imports", "as_nested_list", "get_completions"]
+__all__ = [
+ "open_url",
+ "eval_code",
+ "find_imports",
+ "as_nested_list",
+ "get_completions",
+ "JsException",
+]
diff --git a/src/pyodide-py/pyodide/_base.py b/src/pyodide-py/pyodide/_base.py
--- a/src/pyodide-py/pyodide/_base.py
+++ b/src/pyodide-py/pyodide/_base.py
@@ -1,6 +1,8 @@
"""
A library of helper utilities for connecting Python to the browser environment.
"""
+# Added by C:
+# JsException (from jsproxy.c)
import ast
from io import StringIO
@@ -8,6 +10,16 @@
from typing import Dict, List, Any
+class JsException(Exception):
+ """
+ A wrapper around a Javascript Error to allow the Error to be thrown in Python.
+ """
+
+ # This gets overwritten in jsproxy.c, it is just here for autodoc and humans
+ # reading this file.
+ pass
+
+
def open_url(url: str) -> StringIO:
"""
Fetches a given URL
| diff --git a/src/tests/test_typeconversions.py b/src/tests/test_typeconversions.py
--- a/src/tests/test_typeconversions.py
+++ b/src/tests/test_typeconversions.py
@@ -271,3 +271,39 @@ class Point {
with pytest.raises(WebDriverException, match=msg):
selenium.run("point.y")
assert selenium.run_js("return point.y;") is None
+
+
+def test_javascript_error(selenium):
+ msg = "JsException: Error: This is a js error"
+ with pytest.raises(WebDriverException, match=msg):
+ selenium.run(
+ """
+ from js import Error
+ err = Error.new("This is a js error")
+ err2 = Error.new("This is another js error")
+ raise err
+ """
+ )
+
+
+def test_javascript_error_back_to_js(selenium):
+ selenium.run_js(
+ """
+ window.err = new Error("This is a js error")
+ """
+ )
+ assert (
+ selenium.run(
+ """
+ from js import err
+ py_err = err
+ type(py_err).__name__
+ """
+ )
+ == "JsException"
+ )
+ assert selenium.run_js(
+ """
+ return pyodide.globals["py_err"] === err
+ """
+ )
| JsProxy does not handle Error correctly
Example:
```python
from js import Error
e = Error.new("Hi")
raise e
```
Raises: `TypeError: exceptions must derive from BaseException`.
This came up in https://github.com/iodide-project/pyodide/pull/880#pullrequestreview-555341317. I will try to fix this.
| 2020-12-18T21:58:47 |
|
pyodide/pyodide | 931 | pyodide__pyodide-931 | [
"930"
] | 8de2ed12028c6757afcd916bff8f53eb05f69af3 | diff --git a/pyodide_build/buildall.py b/pyodide_build/buildall.py
--- a/pyodide_build/buildall.py
+++ b/pyodide_build/buildall.py
@@ -220,11 +220,15 @@ def build_packages(packages_dir: Path, outputdir: Path, args) -> None:
"import_name_to_package_name": {},
}
+ libraries = [pkg.name for pkg in pkg_map.values() if pkg.library]
+
for name, pkg in pkg_map.items():
if pkg.library:
continue
- package_data["dependencies"][name] = pkg.dependencies
+ package_data["dependencies"][name] = [
+ x for x in pkg.dependencies if x not in libraries
+ ]
for imp in pkg.meta.get("test", {}).get("imports", [name]):
package_data["import_name_to_package_name"][imp] = name
| CLAPACK.js is missing from the dev branch
For some reason, I started to see the CLAPACK.js missing error when using pyodide from https://cdn.jsdelivr.net/pyodide/dev/full/:
```
Couldn't load package from URL https://cdn.jsdelivr.net/pyodide/dev/full/CLAPACK.js
```
This is due to #927, because CLAPACK is now listed as a dependency of scipy, despite the fact that it is already statically linked.
| 2020-12-23T15:57:44 |
|
pyodide/pyodide | 987 | pyodide__pyodide-987 | [
"658"
] | a211802b4428c86f227240dffc5e0031fd5b2e03 | diff --git a/src/pyodide-py/pyodide/console.py b/src/pyodide-py/pyodide/console.py
--- a/src/pyodide-py/pyodide/console.py
+++ b/src/pyodide-py/pyodide/console.py
@@ -32,6 +32,6 @@ def get_completions(
cursor = len(code)
code = code[:cursor]
interp = jedi.Interpreter(code, namespaces)
- completions = interp.completions()
+ completions = interp.complete()
return [x.name for x in completions]
| diff --git a/packages/jedi/test_jedi.py b/packages/jedi/test_jedi.py
new file mode 100644
--- /dev/null
+++ b/packages/jedi/test_jedi.py
@@ -0,0 +1,22 @@
+def test_jedi(selenium_standalone):
+ selenium_standalone.load_package("jedi")
+ result = selenium_standalone.run(
+ """
+ import jedi
+ script = jedi.Script("import json\\njson.lo", path='example.py')
+ completions = script.complete(2, len('json.lo'))
+ [el.name for el in completions]
+ """
+ )
+ assert result == ["load", "loads"]
+
+
+def test_completions(selenium_standalone):
+ selenium_standalone.load_package("jedi")
+ result = selenium_standalone.run(
+ """
+ import pyodide
+ pyodide.get_completions('import sys\\nsys.v')
+ """
+ )
+ assert result == ["version", "version_info"]
diff --git a/src/tests/test_python.py b/src/tests/test_python.py
--- a/src/tests/test_python.py
+++ b/src/tests/test_python.py
@@ -176,16 +176,6 @@ def test_unknown_attribute(selenium):
)
-def test_completions(selenium):
- result = selenium.run(
- """
- import pyodide
- pyodide.get_completions('import sys\\nsys.v')
- """
- )
- assert result == ["version", "version_info"]
-
-
def test_run_python_debug(selenium):
assert selenium.run_js("return pyodide._module.runPythonDebug('1+1');") == 2
assert selenium.run_js(
| Add PYODIDE_MINIMAL build option
From the added documentation,
> Minimal pyodide build can be enabled by setting the `PYODIDE_MINIMAL`
environment variable. For instance,
> ```
> PYODIDE_MINIMAL=true PYODIDE_PACKAGES="micropip" make
> ```
>
> This will,
> - not include freetype and libpng libraries (it won't be possible to build matplotlib)
> - not include the jedi library, disabling auto-completion in iodide
>
> As a result the size of the core pyodide binaries will be ~15% smaller.
Addresses two points from https://github.com/iodide-project/pyodide/issues/646
Before (master),
```
6,6M pyodide.asm.data
310K pyodide.asm.data.js
2,8M pyodide.asm.js
11M pyodide.asm.wasm
16K pyodide.js
16K pyodide_dev.js
Total: 20.7 MB
```
after (this PR with PYODIDE_MINIMAL=true)
```
5,1M build/pyodide.asm.data
124K build/pyodide.asm.data.js
2,6M build/pyodide.asm.js
9,9M build/pyodide.asm.wasm
16K build/pyodide.js
16K build/pyodide_dev.js
Total: 17.7 MB
```
so it's not that different (14% less), but it's a start.
Draft PR for now, as I think I need to go into a bit more detail on the tests that are run in the minimal build CI job.
| 2020-12-31T11:21:47 |
|
pyodide/pyodide | 1,027 | pyodide__pyodide-1027 | [
"713"
] | d15fe32c216ab63603a8b4068a99ed600308f104 | diff --git a/pyodide_build/buildall.py b/pyodide_build/buildall.py
--- a/pyodide_build/buildall.py
+++ b/pyodide_build/buildall.py
@@ -40,37 +40,28 @@ def __init__(self, pkgdir: Path):
def build(self, outputdir: Path, args) -> None:
with open(self.pkgdir / "build.log", "w") as f:
- if self.library:
- p = subprocess.run(
- ["make"],
- cwd=self.pkgdir,
- check=False,
- stdout=f,
- stderr=subprocess.STDOUT,
- )
- else:
- p = subprocess.run(
- [
- sys.executable,
- "-m",
- "pyodide_build",
- "buildpkg",
- str(self.pkgdir / "meta.yaml"),
- "--cflags",
- args.cflags,
- "--cxxflags",
- args.cxxflags,
- "--ldflags",
- args.ldflags,
- "--target",
- args.target,
- "--install-dir",
- args.install_dir,
- ],
- check=False,
- stdout=f,
- stderr=subprocess.STDOUT,
- )
+ p = subprocess.run(
+ [
+ sys.executable,
+ "-m",
+ "pyodide_build",
+ "buildpkg",
+ str(self.pkgdir / "meta.yaml"),
+ "--cflags",
+ args.cflags,
+ "--cxxflags",
+ args.cxxflags,
+ "--ldflags",
+ args.ldflags,
+ "--target",
+ args.target,
+ "--install-dir",
+ args.install_dir,
+ ],
+ check=False,
+ stdout=f,
+ stderr=subprocess.STDOUT,
+ )
try:
p.check_returncode()
diff --git a/pyodide_build/buildpkg.py b/pyodide_build/buildpkg.py
--- a/pyodide_build/buildpkg.py
+++ b/pyodide_build/buildpkg.py
@@ -92,7 +92,8 @@ def download_and_extract(
tarballname = tarballname[: -len(extension)]
break
- return buildpath / tarballname
+ return buildpath / pkg["source"].get("extract_dir", tarballname)
+
elif "path" in pkg["source"]:
srcdir = Path(pkg["source"]["path"])
@@ -212,6 +213,23 @@ def package_files(buildpath: Path, srcpath: Path, pkg: Dict[str, Any], args):
fd.write(b"\n")
+def run_script(buildpath: Path, srcpath: Path, pkg: Dict[str, Any]):
+ # We don't really do packaging, but needs_rebuild checks .packaged to
+ # determine if it needs to rebuild
+ if (buildpath / ".packaged").is_file():
+ return
+
+ orig_path = Path.cwd()
+ os.chdir(srcpath)
+ try:
+ subprocess.run(["bash", "-c", pkg["build"]["script"]], check=True)
+ finally:
+ os.chdir(orig_path)
+
+ with open(buildpath / ".packaged", "wb") as fd:
+ fd.write(b"\n")
+
+
def needs_rebuild(pkg: Dict[str, Any], path: Path, buildpath: Path) -> bool:
"""
Determines if a package needs a rebuild because its meta.yaml, patches, or
@@ -240,7 +258,7 @@ def build_package(path: Path, args):
name = pkg["package"]["name"]
t0 = datetime.now()
print("[{}] Building package {}...".format(t0.strftime("%Y-%m-%d %H:%M:%S"), name))
- packagedir = name + "-" + pkg["package"]["version"]
+ packagedir = name + "-" + str(pkg["package"]["version"])
dirpath = path.parent
orig_path = Path.cwd()
os.chdir(dirpath)
@@ -254,8 +272,11 @@ def build_package(path: Path, args):
os.makedirs(buildpath)
srcpath = download_and_extract(buildpath, packagedir, pkg, args)
patch(path, srcpath, pkg, args)
- compile(path, srcpath, pkg, args)
- package_files(buildpath, srcpath, pkg, args)
+ if pkg.get("build", {}).get("library"):
+ run_script(buildpath, srcpath, pkg)
+ else:
+ compile(path, srcpath, pkg, args)
+ package_files(buildpath, srcpath, pkg, args)
finally:
os.chdir(orig_path)
t1 = datetime.now()
diff --git a/pyodide_build/pywasmcross.py b/pyodide_build/pywasmcross.py
--- a/pyodide_build/pywasmcross.py
+++ b/pyodide_build/pywasmcross.py
@@ -287,7 +287,7 @@ def handle_command(line, args, dryrun=False):
output = arg
# Fix for scipy to link to the correct BLAS/LAPACK files
- if arg.startswith("-L") and "CLAPACK-WA" in arg:
+ if arg.startswith("-L") and "CLAPACK" in arg:
out_idx = line.index("-o")
out_idx += 1
module_name = line[out_idx]
| Build C libraries with the Python build system
Currently numerous C libraries (e.g. lz4, libxml) and also some packages (e.g. six, parso) are packaged with Makefiles in the root folder of pyodide. I think this can be confusing for new contributors and is not sustainable long term, as it makes it hard to distinguish what is pyodide code and what are packages.
Related to https://github.com/iodide-project/pyodide/issues/38
I think we should,
- [x] move all of those libraires under `packages/` folder (while keeping the use of Makefiles). This would be just a path change. Done in https://github.com/iodide-project/pyodide/pull/714
- [x] migrate these libraries to be defined with a `meta.yaml` and build them with the Python build system. This would allow managing dependencies better (e.g. build LAPACK only when scipy is built etc).
| To give a more concrete example for step 2, consider [`packages/zlib/Makefile`](https://github.com/iodide-project/pyodide/blob/master/packages/zlib/Makefile), we would want to replace it with something along the lines of,
**packages/zlib/meta.yaml**
```yaml
package:
name: zlib
version: 1.2.11
source:
sha256: c3e5e9fdd5004dcb542feda5ee4f0ff0744628baf8ed2dd5d66f8ca1197cb1a1
url: https://zlib.net/zlib-1.2.11.tar.gz
build:
script:
- emconfigure ./configure --prefix=$(SRC)
- emmake make install -j ${PYODIDE_JOBS:-3}
```
and then,
1. adjust [pyodide_build/buildpkg.py](https://github.com/iodide-project/pyodide/blob/master/pyodide_build/buildpkg.py#L230) to use these build script commands if they are available, and fall back to the `python setup.py` command otherwise, as it's done now.
2. figure out how to pass environment variables such as `PYODIDE_JOBS` and also some path variables (here `$SRC`, which is the current dir). There are probably standard approaches for doing this with conda `meta.yaml`. It would require reading the `meta.yaml` spec and maybe also browsing a few examples on conda-forge. A rough sketch of points 1-2 is given after the note below.
3. Add `zlib` as a requirement to any package that needs it, here lxml.
4. Remove the corresponding command from the [main Makefile](https://github.com/iodide-project/pyodide/blob/bc3ffaae427e35b9ecd9a6216fadad031a363a47/Makefile#L268).
**Note:** zlib is required by libxml, so it would likely need to be ported at the same time. Or one could start with a different package, for instance lz4. | 2021-01-03T12:39:19 |
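A rough sketch of how points 1-2 could look (hypothetical code; the real logic would live in `buildpkg.py`, and only the two variables mentioned above are substituted):
```python
import os
import subprocess
from pathlib import Path

def run_build_script(srcpath: Path, script: str) -> None:
    """Run the build script commands from a package's meta.yaml in its source tree."""
    env = dict(os.environ)
    env.setdefault("PYODIDE_JOBS", "3")  # overridable from the caller's environment
    env["SRC"] = str(srcpath)            # exposes $SRC to the script
    subprocess.run(["bash", "-c", script], cwd=srcpath, env=env, check=True)
```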
|
pyodide/pyodide | 1,102 | pyodide__pyodide-1102 | [
"476"
] | e5672449f9655c1376d72fd09fbac4e2f1ff0e27 | diff --git a/pyodide_build/buildpkg.py b/pyodide_build/buildpkg.py
--- a/pyodide_build/buildpkg.py
+++ b/pyodide_build/buildpkg.py
@@ -215,10 +215,12 @@ def package_files(buildpath: Path, srcpath: Path, pkg: Dict[str, Any], args):
def run_script(buildpath: Path, srcpath: Path, pkg: Dict[str, Any]):
- # We don't really do packaging, but needs_rebuild checks .packaged to
- # determine if it needs to rebuild
- if (buildpath / ".packaged").is_file():
- return
+ if pkg.get("build", {}).get("library"):
+ # in libraries this writes the packaged flag
+ # We don't really do packaging, but needs_rebuild checks .packaged to
+ # determine if it needs to rebuild
+ if (buildpath / ".packaged").is_file():
+ return
orig_path = Path.cwd()
os.chdir(srcpath)
@@ -275,11 +277,9 @@ def build_package(path: Path, args):
os.makedirs(buildpath)
srcpath = download_and_extract(buildpath, packagedir, pkg, args)
patch(path, srcpath, pkg, args)
- if pkg.get("build", {}).get("library"):
+ if pkg.get("build", {}).get("script"):
run_script(buildpath, srcpath, pkg)
- else:
- if pkg.get("build", {}).get("script"):
- run_script(buildpath, srcpath, pkg)
+ if not pkg.get("build", {}).get("library", False):
compile(path, srcpath, pkg, args)
package_files(buildpath, srcpath, pkg, args)
finally:
diff --git a/pyodide_build/common.py b/pyodide_build/common.py
--- a/pyodide_build/common.py
+++ b/pyodide_build/common.py
@@ -15,9 +15,9 @@
"-O2",
"-Werror",
"-s", "EMULATE_FUNCTION_POINTER_CASTS=1",
+ "-s",'BINARYEN_EXTRA_PASSES="--pass-arg=max-func-params@61"',
"-s", "SIDE_MODULE=1",
"-s", "WASM=1",
- "-s", "BINARYEN_TRAP_MODE='clamp'",
"--memory-init-file", "0",
"-s", "LINKABLE=1",
"-s", "EXPORT_ALL=1",
diff --git a/pyodide_build/pywasmcross.py b/pyodide_build/pywasmcross.py
--- a/pyodide_build/pywasmcross.py
+++ b/pyodide_build/pywasmcross.py
@@ -27,7 +27,7 @@
import importlib.machinery
import json
import os
-from pathlib import Path
+from pathlib import Path, PurePosixPath
import re
import subprocess
import shutil
@@ -252,7 +252,10 @@ def handle_command(line, args, dryrun=False):
# distutils doesn't use the c++ compiler when compiling c++ <sigh>
if any(arg.endswith((".cpp", ".cc")) for arg in line):
new_args = ["em++"]
- library_output = line[-1].endswith(".so")
+ library_output = False
+ for arg in line:
+ if arg.endswith(".so") and not arg.startswith("-"):
+ library_output = True
if library_output:
new_args.extend(args.ldflags.split())
@@ -263,6 +266,8 @@ def handle_command(line, args, dryrun=False):
lapack_dir = None
+ used_libs = set()
+
# Go through and adjust arguments
for arg in line[1:]:
if arg.startswith("-I"):
@@ -278,9 +283,19 @@ def handle_command(line, args, dryrun=False):
if arg.startswith("-L/usr"):
continue
if arg.startswith("-l"):
- if arg[2:] in replace_libs:
- arg = "-l" + replace_libs[arg[2:]]
-
+ for lib_name in replace_libs.keys():
+ # this enables glob style **/* matching
+ if PurePosixPath(arg[2:]).match(lib_name):
+ if len(replace_libs[lib_name]) > 0:
+ arg = "-l" + replace_libs[lib_name]
+ else:
+ continue
+ if arg.startswith("-l"):
+ # WASM link doesn't like libraries being included twice
+ # skip second one
+ if arg in used_libs:
+ continue
+ used_libs.add(arg)
# threading is disabled for now
if arg == "-pthread":
continue
@@ -291,8 +306,14 @@ def handle_command(line, args, dryrun=False):
# definitely isn't
arg = re.sub(r"/python([0-9]\.[0-9]+)m", r"/python\1", arg)
if arg.endswith(".so"):
- arg = arg[:-3] + ".wasm"
output = arg
+ # don't include libraries from native builds
+ if (
+ len(args.install_dir) > 0
+ and arg.startswith("-l" + args.install_dir)
+ or arg.startswith("-L" + args.install_dir)
+ ):
+ continue
# Fix for scipy to link to the correct BLAS/LAPACK files
if arg.startswith("-L") and "CLAPACK" in arg:
@@ -362,14 +383,14 @@ def handle_command(line, args, dryrun=False):
# Emscripten .so files shouldn't have the native platform slug
if library_output:
- renamed = output[:-5] + ".so"
+ renamed = output
for ext in importlib.machinery.EXTENSION_SUFFIXES:
if ext == ".so":
continue
if renamed.endswith(ext):
renamed = renamed[: -len(ext)] + ".so"
break
- if not dryrun:
+ if not dryrun and output != renamed:
os.rename(output, renamed)
return new_args
| diff --git a/emsdk/tests/common.py b/emsdk/tests/common.py
--- a/emsdk/tests/common.py
+++ b/emsdk/tests/common.py
@@ -1,6 +1,15 @@
-MAIN_C = """
+MAIN_C = r"""
#include <stdio.h>
#include <dlfcn.h>
+#include <setjmp.h>
+
+void never_called()
+{
+ jmp_buf buf;
+ int i=setjmp(buf);
+ longjmp(buf,1);
+}
+
int main() {
puts("hello from main");
diff --git a/emsdk/tests/test_dyncall.py b/emsdk/tests/test_dyncall.py
--- a/emsdk/tests/test_dyncall.py
+++ b/emsdk/tests/test_dyncall.py
@@ -33,6 +33,7 @@ def test_dyncall(tmpdir):
assert(fp(i, 0, 0, 0) == i);
if (i == 0) longjmp(buf, 1);
+
}
"""
)
@@ -42,6 +43,7 @@ def test_dyncall(tmpdir):
subprocess.run(
[
"emcc",
+ "-g4",
"-s",
"SIDE_MODULE=1",
"library.c",
@@ -57,6 +59,7 @@ def test_dyncall(tmpdir):
subprocess.run(
[
"emcc",
+ "-g4",
"-s",
"MAIN_MODULE=1",
"main.c",
@@ -68,4 +71,4 @@ def test_dyncall(tmpdir):
check=True,
)
out = subprocess.run(["node", "a.out.js"], capture_output=True, check=True)
- assert out.stdout == b"hello from main\n0\n4\n"
+ assert out.stdout == b"hello from main\n0\n1\n"
diff --git a/emsdk/tests/test_emulate.py b/emsdk/tests/test_emulate.py
--- a/emsdk/tests/test_emulate.py
+++ b/emsdk/tests/test_emulate.py
@@ -9,8 +9,11 @@ def test_emulate_function(tmpdir):
"""\
#include <stdio.h>
-void foo() {
+// emulate function pointer casts - this has the
+// wrong arguments and return type
+int foo(int extra_args) {
puts("hello from library");
+ return 0;
}"""
)
with open("main.c", "w") as f:
@@ -19,6 +22,7 @@ def test_emulate_function(tmpdir):
subprocess.run(
[
"emcc",
+ "-g",
"-s",
"SIDE_MODULE=1",
"library.c",
@@ -34,6 +38,7 @@ def test_emulate_function(tmpdir):
subprocess.run(
[
"emcc",
+ "-g",
"-s",
"MAIN_MODULE=1",
"main.c",
@@ -41,6 +46,8 @@ def test_emulate_function(tmpdir):
"library.wasm",
"-s",
"EMULATE_FUNCTION_POINTER_CASTS=1",
+ "-s",
+ "EXPORT_ALL=1",
],
check=True,
)
diff --git a/packages/joblib/test_joblib.py b/packages/joblib/test_joblib.py
--- a/packages/joblib/test_joblib.py
+++ b/packages/joblib/test_joblib.py
@@ -2,8 +2,6 @@
def test_joblib_numpy_pickle(selenium, request):
- if selenium.browser == "chrome":
- request.applymarker(pytest.mark.xfail(run=False, reason="chrome not supported"))
selenium.load_package(["numpy", "joblib"])
selenium.run(
"""
diff --git a/packages/pandas/test_pandas.py b/packages/pandas/test_pandas.py
--- a/packages/pandas/test_pandas.py
+++ b/packages/pandas/test_pandas.py
@@ -35,15 +35,11 @@ def generate_largish_json(n_rows: int = 91746) -> Dict:
def test_pandas(selenium, request):
- if selenium.browser == "chrome":
- request.applymarker(pytest.mark.xfail(run=False, reason="chrome not supported"))
selenium.load_package("pandas")
assert len(selenium.run("import pandas\ndir(pandas)")) == 142
def test_extra_import(selenium, request):
- if selenium.browser == "chrome":
- request.applymarker(pytest.mark.xfail(run=False, reason="chrome not supported"))
selenium.load_package("pandas")
selenium.run("from pandas import Series, DataFrame, Panel")
@@ -52,9 +48,6 @@ def test_extra_import(selenium, request):
def test_load_largish_file(selenium_standalone, request, httpserver):
selenium = selenium_standalone
- if selenium.browser == "chrome":
- request.applymarker(pytest.mark.xfail(run=False, reason="chrome not supported"))
-
selenium.load_package("pandas")
selenium.load_package("matplotlib")
diff --git a/packages/scikit-learn/test_scikit-learn.py b/packages/scikit-learn/test_scikit-learn.py
--- a/packages/scikit-learn/test_scikit-learn.py
+++ b/packages/scikit-learn/test_scikit-learn.py
@@ -3,8 +3,6 @@
def test_scikit_learn(selenium_standalone, request):
selenium = selenium_standalone
- if selenium.browser == "chrome":
- request.applymarker(pytest.mark.xfail(run=False, reason="chrome not supported"))
selenium.load_package("scikit-learn")
assert (
selenium.run(
diff --git a/packages/scipy/test_scipy.py b/packages/scipy/test_scipy.py
--- a/packages/scipy/test_scipy.py
+++ b/packages/scipy/test_scipy.py
@@ -6,9 +6,6 @@
def test_scipy_linalg(selenium_standalone, request):
selenium = selenium_standalone
- if selenium.browser == "chrome":
- request.applymarker(pytest.mark.xfail(run=False, reason="chrome not supported"))
-
selenium.load_package("scipy")
cmd = dedent(
r"""
diff --git a/packages/test_common.py b/packages/test_common.py
--- a/packages/test_common.py
+++ b/packages/test_common.py
@@ -26,7 +26,7 @@ def registered_packages_meta():
}
-UNSUPPORTED_PACKAGES = {"chrome": ["pandas", "scipy", "scikit-learn"], "firefox": []}
+UNSUPPORTED_PACKAGES = {"chrome": [], "firefox": []}
@pytest.mark.parametrize("name", registered_packages())
@@ -68,34 +68,9 @@ def test_import(name, selenium_standalone):
))
"""
)
-
- selenium_standalone.load_package(name)
-
- # Make sure there are no additional .pyc file
- assert (
- selenium_standalone.run(
- """
- len(list(glob.glob(
- '/lib/python3.8/site-packages/**/*.pyc',
- recursive=True)
- ))
- """
- )
- == baseline_pyc
- )
-
loaded_packages = []
for import_name in meta.get("test", {}).get("imports", []):
-
- if name not in loaded_packages:
- selenium_standalone.load_package(name)
- loaded_packages.append(name)
- try:
- selenium_standalone.run("import %s" % import_name)
- except Exception:
- print(selenium_standalone.logs)
- raise
-
+ selenium_standalone.run_async("import %s" % import_name)
# Make sure that even after importing, there are no additional .pyc
# files
assert (
diff --git a/pyodide_build/tests/test_pywasmcross.py b/pyodide_build/tests/test_pywasmcross.py
--- a/pyodide_build/tests/test_pywasmcross.py
+++ b/pyodide_build/tests/test_pywasmcross.py
@@ -43,7 +43,7 @@ def test_handle_command():
assert handle_command_wrap("gcc test.c", args) == "emcc test.c"
assert (
handle_command_wrap("gcc -shared -c test.o -o test.so", args)
- == "emcc -c test.o -o test.wasm"
+ == "emcc -c test.o -o test.so"
)
# check cxxflags injection and cpp detection
@@ -66,7 +66,7 @@ def test_handle_command():
)
assert (
handle_command_wrap("gcc -shared -c test.o -o test.so", args)
- == "emcc -lm -c test.o -o test.wasm"
+ == "emcc -lm -c test.o -o test.so"
)
# check library replacement and removal of double libraries
@@ -80,7 +80,7 @@ def test_handle_command():
)
assert (
handle_command_wrap("gcc -shared test.o -lbob -ljim -ljim -o test.so", args)
- == "emcc test.o -lfred -ljim -ljim -o test.wasm"
+ == "emcc test.o -lfred -ljim -o test.so"
)
# compilation checks in numpy
@@ -106,7 +106,7 @@ def test_conda_compiler_compat():
)
assert handle_command_wrap(
"gcc -shared -c test.o -B /compiler_compat -o test.so", args
- ) == ("emcc -c test.o -o test.wasm")
+ ) == ("emcc -c test.o -o test.so")
def test_environment_var_substitution(monkeypatch):
| Migrate to upstream LLVM for emscripten backend
It looks like the now recommended compiler backend for emscripten is the upstream LLVM instead of fastcomp + asm2wasm (https://github.com/WebAssembly/binaryen/issues/2177#issuecomment-504269479).
This might also help with scipy dynamic linking, as the section of code that errors now in https://github.com/iodide-project/pyodide/pull/473 will no longer be run.
~~There is no tagged emsdk release with it yet, I think, but hopefully one will be released soon.~~
I don't have a very good understanding of how much work this would involve. I'm hoping existing emsdk patches would still apply. Maybe it would be necessary to update to the latest emsdk release first.
| Some WIP on this can be found in the https://github.com/iodide-project/pyodide/compare/master...rth:llvm-upstream branch. For one thing `EMULATED_FUNCTION_POINTERS` are not allowed when `WASM_BACKEND=1` (I'm not sure if they are still needed or not).
Also on that branch, the build fails with the following errors,
<details>
```
em++ -s EXPORT_NAME="'pyodide'" -o build/pyodide.asm.html src/main.bc src/jsimport.bc src/jsproxy.bc src/js2python.bc src/pyimport.bc src/pyproxy.bc src/python2js.bc src/python2js_buffer.bc src/runpython.bc src/hiwire.bc \
-O3 -s MODULARIZE=1 cpython/installs/python-3.7.0/lib/libpython3.7.a lz4/lz4-1.8.3/lib/liblz4.a -s "BINARYEN_METHOD='native-wasm'" -s TOTAL_MEMORY=1073741824 -s ALLOW_MEMORY_GROWTH=1 -s MAIN_MODULE=1 -s LINKABLE=1 -s EXPORT_ALL=1 -s EXPORTED_FUNCTIONS='["___cxa_guard_acquire", "__ZNSt3__28ios_base4initEPv"]' -s WASM=1 -s SWAPPABLE_ASM_MODULE=1 -s USE_FREETYPE=1 -s USE_LIBPNG=1 -std=c++14 -Lcpython/build/sqlite-autoconf-3270200/.libs -lsqlite3 -lstdc++ --memory-init-file 0 -s "BINARYEN_TRAP_MODE='clamp'" -s TEXTDECODER=0 -s LZ4=1 -s FORCE_FILESYSTEM=1
wasm-ld: error: lz4_5d1337a7.o: machine type must be wasm32
wasm-ld: error: lz4frame_75f13d14.o: machine type must be wasm32
wasm-ld: error: lz4hc_43143d2b.o: machine type must be wasm32
wasm-ld: error: xxhash_96cdb457.o: machine type must be wasm32
wasm-ld: error: sqlite3_6234be2a.o: machine type must be wasm32
shared:ERROR: '/src/emsdk/emsdk/upstream/bin/wasm-ld -o /tmp/emscripten_temp_hkjy2pic/pyodide.asm.wasm --allow-undefined --import-memory --import-table --lto-O0 --whole-archive src/main.bc src/jsimport.bc src/jsproxy.bc src/js2python.bc src/pyimport.bc src/pyproxy.bc src/python2js.bc src/python2js_buffer.bc src/runpython.bc src/hiwire.bc cpython/installs/python-3.7.0/lib/libpython3.7.a lz4/lz4-1.8.3/lib/liblz4.a cpython/build/sqlite-autoconf-3270200/.libs/libsqlite3.a /src/emsdk/emsdk/.emscripten_cache/wasm-obj-pic/libfreetype.a /src/emsdk/emsdk/.emscripten_cache/wasm-obj-pic/libpng.a /src/emsdk/emsdk/.emscripten_cache/wasm-obj-pic/libz.a /src/emsdk/emsdk/.emscripten_cache/wasm-obj-pic/libc++_noexcept.a /src/emsdk/emsdk/.emscripten_cache/wasm-obj-pic/libc++abi.a /src/emsdk/emsdk/.emscripten_cache/wasm-obj-pic/libc.a /src/emsdk/emsdk/.emscripten_cache/wasm-obj-pic/libcompiler_rt.a /src/emsdk/emsdk/.emscripten_cache/wasm-obj-pic/libc-wasm.a /src/emsdk/emsdk/.emscripten_cache/wasm-obj-pic/libdlmalloc.a /src/emsdk/emsdk/.emscripten_cache/wasm-obj-pic/libpthreads_stub.a /src/emsdk/emsdk/.emscripten_cache/wasm-obj-pic/libc-extras.a /src/emsdk/emsdk/.emscripten_cache/wasm-obj-pic/libcompiler_rt_wasm.a /src/emsdk/emsdk/.emscripten_cache/wasm-obj-pic/libc_rt_wasm.a --no-whole-archive -mllvm -combiner-global-alias-analysis=false -mllvm -enable-emscripten-sjlj -mllvm -disable-lsr --no-gc-sections --export-dynamic --export-all -pie -z stack-size=5242880 --initial-memory=1073741824 --no-entry' failed (1)
Makefile:64: recipe for target 'build/pyodide.asm.js' failed
make: *** [build/pyodide.asm.js] Error 1
```
</details>
Possibly related to https://github.com/emscripten-core/emscripten/issues/8321
hi,
> EMULATED_FUNCTION_POINTERS are not allowed when WASM_BACKEND=1 (I'm not sure if they are still needed or not)
no, they are no longer required since Python 3.8+ has the required fixes.
could be https://bugs.python.org/issue33012
Side note: function pointers were broken on fastcomp when using dynamic linking, but it has been fixed on the latest emsdk.
But dynamic linking is still problematic on upstream with -fPIC.
To move forward on this I think it might help to do the update more incrementally,
1. first update from 1.38.30 to 1.38.35 - that's the first release with the new emscripten build system, but with very minor changes in emscripten. Here the build, at least for CPython, is working; there is just some issue to resolve when importing pyodide (and likely the fact that we might still need to patch [binaryen for num_params](https://github.com/iodide-project/pyodide/blob/master/emsdk/patches/num_params.patch)) https://github.com/iodide-project/pyodide/pull/480
2. Update to the latest emscripten still using the fastcomp compiler.
3. Switch to the LLVM upstream compiler in emscripten.
I think directly updating to 3 as we tried to do in https://github.com/iodide-project/pyodide/pull/531 https://github.com/iodide-project/pyodide/pull/637 might require a bit too much work in one PR.
A quick status update on this,
- for the incremental approach outlined in the above comment we are still blocked at step 1. Building with emscripten 1.38.35 and the fastcomp backend works, but there is an issue with importing numpy https://github.com/iodide-project/pyodide/pull/480#issuecomment-635823062 (also opened https://github.com/numpy/numpy/issues/16804 about it, in case someone at numpy has suggestions, though it's probably a bit off topic for the numpy repo)
- for the non-incremental approach (i.e. https://github.com/iodide-project/pyodide/pull/637, which I updated with latest master) there might still be things missing from https://github.com/iodide-project/pyodide/pull/531 and I couldn't find the fixes by @manumartin discussed there. It sounds like we would need at least 1.38.42 there. The main concern with this approach is that
  - we can resolve build issues, however if something fails at import/run time, it's harder to determine which emscripten change is responsible as there is a long list of changes to go through.
  - generally it would require a lot of uninterrupted attention, with the risk of abandoning before we make it fully work.
In any case this is likely one of the most critical issues for pyodide at the moment, as it's blocking a number of other topics. Help would be very much appreciated.
cc @sconover @daicoden
Thanks to @dalcde the migration to emsdk 1.38.34 is now done.
When [updating to 1.38.35](https://github.com/iodide-project/pyodide/compare/master...rth:emsdk-1.38.35?expand=1) there is another issue apparently, this time with loading of the main pyodide package. There are no errors but the load with `languagePluginLoader` never completes (it keeps using CPU resources so maybe an infinite loop somewhere). I need to investigate this.
It used to be relatively straightforward to update the emsdk versions.
My experience with the update (I tried 1.38.37) is that it actually hangs after some time, consuming no CPU resources (or at least very little).
Upgrading to 1.38.38 results in another breakage during compile time, where emconfigure doesn't recognize that certain math.h functions (e.g. acos) are present.
The latter breakage involves a fatal error in the asm2wasm script (it complains about a legitimate problem but didn't use to be fatal); migrating to the llvm backend might help (there is another asm2wasm breakage that occurs when updating binaryen).
> My experience with the update (I tried 1.38.37) is that it actually hangs after some time, consuming no CPU resources (or at least very little).
Yes, indeed it's the same issue. I'm just saying that it first appeared in 1.38.35.
> migrating to the llvm backend might help
In previous attempts I think we needed at least 1.38.42 to do that, as for earlier versions there are some compilation issues that can only be solved by updating emsdk.
At the moment, as far as I can tell, dynamic linking together with EMULATE_FUNCTION_POINTER_CASTS is broken with the llvm backend, see https://github.com/emscripten-core/emscripten/issues/13076 . I don't think we would be able to progress until that is fixed.
@joemarshall has reported some progress on this subject in https://gitter.im/Wasm-Python/community?at=5fe7057ac746c6431cd929eb It would be very interesting to try to integrate some of [your changes](https://github.com/joemarshall/pyodide/commits/master) into pyodide.
Oh yeah, just getting round to looking at this. I have upstream 2.0.9 building and running on my repo https://github.com/joemarshall/pyodide now. There's a surprising amount of patching though; the main things that needed fixing are:
1) use built in file packager (I see that's fixed in pyodide now)
2) change build to make .o files and .so files correctly
3) fix function signatures across scipy, numpy and clapack to be consistent. This is partly due to f2c handling subroutines by converting them into functions returning `int` 0 whereas some code in numpy and scipy expects them to return `void`, and partly due to a bunch of code in scipy that uses implicit declarations that resolve differently from how the functions are actually declared. C doesn't normally care about this, but when linking for wasm these are errors because the wasm stack machine checks arguments and return types.
Oh and the patch to binaryen so that fpcast emulation doesn't break dlopen and dlsym.
That's great @joemarshall !
> This is due partly to f2c handling subroutines by converting them into functions returning int 0, whereas some code in numpy and scipy expects them to return void,
Good to know, we will probably have to live with those patches since normal numpy/scipy builds don't use f2c
> also a bunch of code in scipy that uses implicit declarations that resolve differently to how they are declared. C doesn't care about this normally but in linking for wasm these are errors because the wasm stack machine checks arguments and return types
We could try to upstream those, assuming they apply to parts of the scipy code base that haven't evolved much since the version we use, which is quite outdated.
From your comment on Gitter,
> Incidentally, do you know the current plan for switching across to upstream from fastcomp? Would an upstream branch make sense? Just that a lot of the fixes I did, like all the messing with clapack, numpy and scipy to get rid of conflicting function signatures and make .so files build right, i think they will break fastcomp builds.
yes, we could create an `upstream-llvm` branch and accept PRs there if it helps, then merge it (without squashing) into master. Though if there are some fixes that we can merge into master independently that would be good as well. WDYT @dalcde ?
> we will probably have to live with those patches since normal numpy/scipy builds don't use f2c
Actually it looks like it might be possible to use flang with the WASM target using LLVM 11+, https://github.com/iodide-project/pyodide/issues/184 so we might be able to get rid of all f2c fixes and replace CLAPACK by BLAS + newer LAPACK. Though we still need a working build on master with the llvm backend as a starting point.
I've got the latest llvm, and flang (the one that used to be f18) doesn't build code yet.
In theory old flang might support new enough llvm to build wasm, but it is 64 bit only right now.
And there's also a version of GCC with a wasm backend, which in theory should compile Fortran too. Not had time to play with it though.
Various repos relating to this are on https://github.com/pipcet
| 2021-01-10T09:40:41 |
pyodide/pyodide | 1,113 | pyodide__pyodide-1113 | [
"1110"
] | a48a1ffc03ac61a663dbe08d749a9912e12a8276 | diff --git a/conftest.py b/conftest.py
--- a/conftest.py
+++ b/conftest.py
@@ -108,7 +108,18 @@ def run_js(self, code):
Error.stackTraceLimit = Infinity;
let run = () => { %s }
try {
- return [0, run()]
+ let result = run();
+ if(pyodide && pyodide._module && pyodide._module._PyErr_Occurred()){
+ try {
+ pyodide._module._pythonexc2js();
+ } catch(e){
+ console.error(`Python exited with error flag set! Error was:\n{e.message}`);
+ // Don't put original error message in new one: we want
+ // "pytest.raises(xxx, match=msg)" to fail
+ throw new Error(`Python exited with error flag set!`);
+ }
+ }
+ return [0, result]
} catch (e) {
return [1, e.toString(), e.stack];
}
@@ -130,11 +141,22 @@ def run_js_async(self, code):
let cb = arguments[arguments.length - 1];
let run = async () => { %s }
(async () => {
- try {{
- cb([0, await run()]);
- }} catch (e) {{
+ try {
+ let result = await run();
+ if(pyodide && pyodide._module && pyodide._module._PyErr_Occurred()){
+ try {
+ pyodide._module._pythonexc2js();
+ } catch(e){
+ console.error(`Python exited with error flag set! Error was:\n{e.message}`);
+ // Don't put original error message in new one: we want
+ // "pytest.raises(xxx, match=msg)" to fail
+ throw new Error(`Python exited with error flag set!`);
+ }
+ }
+ cb([0, result]);
+ } catch (e) {
cb([1, e.toString(), e.stack]);
- }}
+ }
})()
"""
| diff --git a/packages/numpy/test_numpy.py b/packages/numpy/test_numpy.py
--- a/packages/numpy/test_numpy.py
+++ b/packages/numpy/test_numpy.py
@@ -113,14 +113,12 @@ def test_py2js_buffer_clear_error_flag(selenium):
selenium.load_package("numpy")
selenium.run("import numpy as np")
selenium.run("x = np.array([['string1', 'string2'], ['string3', 'string4']])")
- assert (
- selenium.run_js(
- """
- pyodide.globals.x
- return pyodide._module._PyErr_Occurred();
- """
- )
- == 0
+ selenium.run_js(
+ """
+ pyodide.globals.x
+ // Implicit assertion: this doesn't leave python error indicator set
+ // (automatically checked in conftest.py)
+ """
)
diff --git a/src/tests/test_typeconversions.py b/src/tests/test_typeconversions.py
--- a/src/tests/test_typeconversions.py
+++ b/src/tests/test_typeconversions.py
@@ -334,13 +334,21 @@ def test_memoryview_conversion(selenium):
"""
import array
a = array.array("Q", [1,2,3])
+ b = array.array("u", "123")
"""
)
selenium.run_js(
"""
pyodide.globals.a
- if(pyodide._module._PyErr_Occurred()){
- pyodide._module._pythonexc2js();
- }
+ // Implicit assertion: this doesn't leave python error indicator set
+ // (automatically checked in conftest.py)
+ """
+ )
+
+ selenium.run_js(
+ """
+ pyodide.globals.b
+ // Implicit assertion: this doesn't leave python error indicator set
+ // (automatically checked in conftest.py)
"""
)
| Soundness error in python2js_buffer
This bug goes back at least to version 0.15.0. It's been messing up my error handling code.
First:
```python
import numpy as np
x = np.array([['string1', 'string2'], ['string3', 'string4']])
```
Second: type `x<enter>1<enter>x<enter>1<enter>`.
| In v0.15.0 or v0.16.1 it first says:
```python
SystemError: <JsBoundMethod object at 0x4de7010> returned a result with an error set
Error: TypeError: Internal error: Invalid type to convert to array buffer.
The above exception was the direct cause of the following exception:
SystemError: <JsBoundMethod object at 0x4de7010> returned a result with an error set
string1,string2,string3,string4
```
Then on "1<enter>" it says:
```python
File "<console>", line 1
x
^
SyntaxError: multiple statements found while compiling a single statement
``` | 2021-01-11T03:49:26 |
pyodide/pyodide | 1,138 | pyodide__pyodide-1138 | [
"788",
"461"
] | c533b071ba35b403aa7b4f05c0263e3da8c94a41 | diff --git a/src/pyodide-py/pyodide/_core.py b/src/pyodide-py/pyodide/_core.py
--- a/src/pyodide-py/pyodide/_core.py
+++ b/src/pyodide-py/pyodide/_core.py
@@ -2,7 +2,7 @@
import platform
if platform.system() == "Emscripten":
- from _pyodide_core import JsProxy, JsBoundMethod, JsException
+ from _pyodide_core import JsProxy, JsMethod, JsException
else:
# Can add shims here if we are so inclined.
class JsException(Exception):
@@ -17,10 +17,10 @@ class JsProxy:
# Defined in jsproxy.c
- class JsBoundMethod:
+ class JsMethod:
"""A proxy to make it possible to call Javascript bound methods from Python."""
# Defined in jsproxy.c
-__all__ = [JsProxy, JsBoundMethod, JsException]
+__all__ = [JsProxy, JsMethod, JsException]
| diff --git a/src/tests/test_jsproxy.py b/src/tests/test_jsproxy.py
--- a/src/tests/test_jsproxy.py
+++ b/src/tests/test_jsproxy.py
@@ -7,29 +7,38 @@ def test_jsproxy_dir(selenium):
result = selenium.run_js(
"""
window.a = { x : 2, y : "9" };
+ window.b = function(){};
return pyodide.runPython(`
from js import a
- dir(a)
+ from js import b
+ [dir(a), dir(b)]
`);
"""
)
- assert set(result).issuperset(
+ jsproxy_items = set(
[
"__bool__",
- "__call__",
"__class__",
"__defineGetter__",
"__defineSetter__",
"__delattr__",
"__delitem__",
"constructor",
- "new",
"toString",
"typeof",
"valueOf",
- "x",
]
)
+ a_items = set(["x", "y"])
+ callable_items = set(["__call__", "new"])
+ set0 = set(result[0])
+ set1 = set(result[1])
+ assert set0.issuperset(jsproxy_items)
+ assert set0.isdisjoint(callable_items)
+ assert set0.issuperset(a_items)
+ assert set1.issuperset(jsproxy_items)
+ assert set1.issuperset(callable_items)
+ assert set1.isdisjoint(a_items)
def test_jsproxy_getattr(selenium):
@@ -245,7 +254,6 @@ def test_jsproxy_call_meth_js(selenium):
)
[email protected]
def test_jsproxy_call_meth_js_kwargs(selenium):
assert selenium.run_js(
"""
@@ -257,7 +265,8 @@ def test_jsproxy_call_meth_js_kwargs(selenium):
return pyodide.runPython(
`
from js import a
- a.f(y=10, x=2) == [a, x, y]
+ [r0, r1, r2] = a.f(y=10, x=2)
+ r0 == a and r1 == 2 and r2 == 10
`
);
"""
| Nested attribute access in JS->Python type conversion
Currently the following code fails,
```js
>>> from js import window
>>> window.URL.createObjectURL
Error: Traceback (most recent call last):
File "/lib/python3.7/site-packages/pyodide.py", line 45, in eval_code
return eval(compile(expr, '<eval>', mode='eval'), ns, ns)
File "<eval>", line 1, in <module>
AttributeError: 'JsBoundMethod' object has no attribute 'createObjectURL'
```
(while `window.URL.createObjectURL` is a valid JS object) because nested attributes (i.e. attribute of an attribute) don't seem to be supported. It would have been nice to make it work, though I have not looked at how difficult that would be.
from js import fetch treats fetch as a free function
`fetch` is a member function of `window`.
However, using `from js import fetch` doesn't realize that and leads to the error:
`TypeError: 'fetch' called on an object that does not implement interface Window.`
For Reproducing the Error:
```
%%py
from js import document, Request, fetch, URL
img_tag = document.createElement('img')
req = Request.new('https://i.ibb.co/3f4yJQS/face4.jpg')
def func(response):
return response.blob()
def func2(blob):
objURL = URL.createObjectURL(blob)
img_tag.src = objURL
fetch(req).then(func).then(func2)
document.body.appendChild(img_tag)
```
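A workaround consistent with the diagnosis above (untested sketch) is to call `fetch` through `window`, so that `this` stays bound to it:
```python
# hypothetical workaround: look fetch up as a member of window instead of
# importing it as a free function, so the JS `this` is bound correctly
from js import window
window.fetch(req).then(func).then(func2)
```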
| Ah this is another issue with the typeconversions code! I'll try to fix it.
The issue here is a bad interaction between `pyproxy.c` and `jsproxy.c`. In particular, if you index a `jsproxy` and the value of the field is a `pyproxy` then instead of returning the wrapped python object `jsproxy` returns a `JsBoundMethod`. It does this because the `pyproxy` is a function. Why is it a function? Well it might implement `__call__` and javascript throws an error if you try to call something that is not a function. So the `PyProxy` is a function.
https://github.com/iodide-project/pyodide/blob/41592637e6a5231d1fa74235ec5f46919c18e25d/src/type_conversion/pyproxy.c#L144
Then `JsProxy` should check first whether the function is a `PyProxy` or not before just blindly returning a `JsBoundMethod`.
https://github.com/iodide-project/pyodide/blob/master/src/type_conversion/jsproxy.c#L70
I can submit a PR to fix this, though this is code that I plan to rewrite much more drastically soon.
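In Python-flavored pseudocode (the real logic lives in C in `jsproxy.c`, and the helper names here are hypothetical), the lookup fix sketched above would be:
```python
def jsproxy_getattr(js_obj, name):
    value = js_raw_get(js_obj, name)  # raw JS attribute lookup on the proxied object
    if is_pyproxy(value):
        # a JS "function" that is really a wrapped Python object:
        # unwrap it instead of blindly wrapping it again as a JsBoundMethod
        return unwrap_pyproxy(value)
    if is_js_function(value):
        return JsBoundMethod(value, js_obj)  # keep `this` bound to the owner
    return js2python(value)
```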
| 2021-01-15T02:56:59 |
pyodide/pyodide | 1,141 | pyodide__pyodide-1141 | [
"875"
] | 3e2e4960add2f2c76bfb478bc88818837e9f11ac | diff --git a/src/pyodide-py/pyodide/console.py b/src/pyodide-py/pyodide/console.py
--- a/src/pyodide-py/pyodide/console.py
+++ b/src/pyodide-py/pyodide/console.py
@@ -1,8 +1,40 @@
-from typing import Optional, Callable
+from typing import Optional, Callable, Any
import code
import io
import sys
import platform
+from contextlib import contextmanager
+import builtins
+
+# this import can fail when we are outside a browser (e.g. for tests)
+try:
+ import js
+
+ _dummy_promise = js.Promise.resolve()
+ _load_packages_from_imports = js.pyodide.loadPackagesFromImports
+
+except ImportError:
+
+ class _FakePromise:
+ """A promise that mimic the JS promises.
+
+ Only `then is supported` and there is no asynchronicity.
+ execution occurs when then is call.
+
+ This is mainly to fake `load_packages_from_imports`
+ and `InteractiveConsole.run_complete` in contexts
+ where JS promises are not available (tests)."""
+
+ def __init__(self, args=None):
+ self.args = (args,) if args is not None else ()
+
+ def then(self, func, *args):
+ return _FakePromise(func(*self.args))
+
+ _dummy_promise = _FakePromise()
+
+ def _load_packages_from_imports(*args):
+ return _dummy_promise
__all__ = ["InteractiveConsole"]
@@ -62,7 +94,9 @@ class InteractiveConsole(code.InteractiveConsole):
"""Interactive Pyodide console
Base implementation for an interactive console that manages
- stdout/stderr redirection.
+ stdout/stderr redirection. Since packages are loaded before running
+ code, `runcode` returns a JS promise. Override `sys.displayhook` to
+ catch the result of an execution.
`self.stdout_callback` and `self.stderr_callback` can be overloaded.
@@ -91,12 +125,16 @@ def __init__(
self._stderr = None
self.stdout_callback = stdout_callback
self.stderr_callback = stderr_callback
- self._persistent_stream_redirection = persistent_stream_redirection
- if self._persistent_stream_redirection:
+ self._streams_redirected = False
+ if persistent_stream_redirection:
self.redirect_stdstreams()
+ self.run_complete = _dummy_promise
def redirect_stdstreams(self):
""" Toggle stdout/stderr redirections. """
+ # already redirected?
+ if self._streams_redirected:
+ return
if self._stdout is None:
# we use meta callbacks to allow self.std{out,err}_callback
@@ -127,27 +165,126 @@ def meta_stderr_callback(*args):
# actual redirection
sys.stdout = self._stdout
sys.stderr = self._stderr
+ self._streams_redirected = True
def restore_stdstreams(self):
"""Restore stdout/stderr to the value it was before
the creation of the object."""
- sys.stdout = self._old_stdout
- sys.stderr = self._old_stderr
-
- def runcode(self, code):
- if self._persistent_stream_redirection:
- super().runcode(code)
+ if self._streams_redirected:
+ sys.stdout = self._old_stdout
+ sys.stderr = self._old_stderr
+ self._streams_redirected = False
+
+ @contextmanager
+ def stdstreams_redirections(self):
+ """Ensure std stream redirection.
+
+ This supports nesting."""
+ if self._streams_redirected:
+ yield
else:
self.redirect_stdstreams()
- super().runcode(code)
+ yield
self.restore_stdstreams()
+ def flush_all(self):
+ """ Force stdout/stderr flush. """
+ with self.stdstreams_redirections():
+ sys.stdout.flush()
+ sys.stderr.flush()
+
+ def runsource(self, *args, **kwargs):
+ """Force streams redirection.
+
+ Syntax errors are not thrown by runcode but here in runsource.
+        This is why we force redirection here, since doing it
+        twice is not an issue."""
+
+ with self.stdstreams_redirections():
+ return super().runsource(*args, **kwargs)
+
+ def runcode(self, code):
+ """Load imported packages then run code, async.
+
+ To achieve nice result representation, the interactive console
+ is fully implemented in Python. This has a major drawback:
+        packages must be loaded from here. This is why this
+ function sets the promise `self.run_complete`.
+ If you need to wait for the end of the computation,
+        you should await it."""
+ parent_runcode = super().runcode
+ source = "\n".join(self.buffer)
+
+ def load_packages_and_run(*args):
+ def run(*args):
+ with self.stdstreams_redirections():
+ parent_runcode(code)
+ # in CPython's REPL, flush is performed
+ # by input(prompt) at each new prompt ;
+ # since we are not using input, we force
+ # flushing here
+ self.flush_all()
+ return _dummy_promise
+
+ return _load_packages_from_imports(source).then(run)
+
+ self.run_complete = self.run_complete.then(load_packages_and_run)
+
def __del__(self):
- if self._persistent_stream_redirection:
- self.restore_stdstreams()
+ self.restore_stdstreams()
def banner(self):
""" A banner similar to the one printed by the real Python interpreter. """
# copyied from https://github.com/python/cpython/blob/799f8489d418b7f9207d333eac38214931bd7dcc/Lib/code.py#L214
cprt = 'Type "help", "copyright", "credits" or "license" for more information.'
- return f"Python {platform.python_version()} {platform.python_build()} on WebAssembly VM\n{cprt}"
+ version = platform.python_version()
+ build = f"({', '.join(platform.python_build())})"
+ return f"Python {version} {build} on WebAssembly VM\n{cprt}"
+
+
+def repr_shorten(
+ value: Any, limit: int = 1000, split: Optional[int] = None, separator: str = "..."
+):
+ """Compute the string representation of `value` and shorten it
+ if necessary.
+
+    If it is longer than `limit` then return the first `split`
+    characters and the last `split` characters separated by `separator`.
+ Default value for `split` is `limit // 2`.
+ """
+ if split is None:
+ split = limit // 2
+ text = repr(value)
+ if len(text) > limit:
+ text = f"{text[:split]}{separator}{text[-split:]}"
+ return text
+
+
+def displayhook(value, repr: Callable[[Any], str]):
+ """A displayhook with custom `repr` function.
+
+    It is intended to replace `sys.displayhook`. Note that monkeypatching
+    `builtins.repr` does not work in `sys.displayhook`: the pointer to
+    `repr` seems hardcoded in the default `sys.displayhook` version
+ (which is written in C)."""
+ # from https://docs.python.org/3/library/sys.html#sys.displayhook
+ # If value is not None, this function prints repr(value) to
+ # sys.stdout, and saves value in builtins._. If repr(value) is not
+ # encodable to sys.stdout.encoding with sys.stdout.errors error
+ # handler (which is probably 'strict'), encode it to
+ # sys.stdout.encoding with 'backslashreplace' error handler.
+ if value is None:
+ return
+ builtins._ = None # type: ignore
+ text = repr(value)
+ try:
+ sys.stdout.write(text)
+ except UnicodeEncodeError:
+ bytes = text.encode(sys.stdout.encoding, "backslashreplace")
+ if hasattr(sys.stdout, "buffer"):
+ sys.stdout.buffer.write(bytes)
+ else:
+ text = bytes.decode(sys.stdout.encoding, "strict")
+ sys.stdout.write(text)
+ sys.stdout.write("\n")
+ builtins._ = value # type: ignore
| diff --git a/src/tests/test_console.py b/src/tests/test_console.py
--- a/src/tests/test_console.py
+++ b/src/tests/test_console.py
@@ -1,6 +1,7 @@
import pytest
from pathlib import Path
import sys
+import io
sys.path.append(str(Path(__file__).parents[2] / "src" / "pyodide-py"))
@@ -23,15 +24,13 @@ def callback(string):
@pytest.fixture
-def safe_stdstreams():
- stdout = sys.stdout
- stderr = sys.stderr
+def safe_sys_redirections():
+ redirected = sys.stdout, sys.stderr, sys.displayhook
yield
- sys.stdout = stdout
- sys.stderr = stderr
+ sys.stdout, sys.stderr, sys.displayhook = redirected
-def test_interactive_console_streams(safe_stdstreams):
+def test_interactive_console_streams(safe_sys_redirections):
my_stdout = ""
my_stderr = ""
@@ -66,6 +65,12 @@ def stderr_callback(string):
shell.push("print('foobar')")
assert my_stdout == "foo\nfoobar\n"
+ shell.push("print('foobar')")
+ assert my_stdout == "foo\nfoobar\nfoobar\n"
+
+ shell.push("1+1")
+ assert my_stdout == "foo\nfoobar\nfoobar\n2\n"
+
shell.restore_stdstreams()
my_stdout = ""
@@ -94,3 +99,91 @@ def stderr_callback(string):
print("bar")
assert my_stdout == "foobar\n"
+
+ shell.push("print('foobar')")
+ assert my_stdout == "foobar\nfoobar\n"
+
+ shell.push("import sys")
+ shell.push("print('foobar', file=sys.stderr)")
+ assert my_stderr == "foobar\n"
+
+ shell.push("1+1")
+ assert my_stdout == "foobar\nfoobar\n2\n"
+
+
+def test_repr(safe_sys_redirections):
+ sep = "..."
+ for string in ("x" * 10 ** 5, "x" * (10 ** 5 + 1)):
+ for limit in (9, 10, 100, 101):
+ assert len(
+ console.repr_shorten(string, limit=limit, separator=sep)
+ ) == 2 * (limit // 2) + len(sep)
+
+ sys.stdout = io.StringIO()
+ console.displayhook(
+ [0] * 100, lambda v: console.repr_shorten(v, 100, separator=sep)
+ )
+ assert len(sys.stdout.getvalue()) == 100 + len(sep) + 1 # for \n
+
+
[email protected]
+def safe_selenium_sys_redirections(selenium):
+ selenium.run("_redirected = sys.stdout, sys.stderr, sys.displayhook")
+ yield
+ selenium.run("sys.stdout, sys.stderr, sys.displayhook = _redirected")
+
+
+def test_interactive_console(selenium, safe_selenium_sys_redirections):
+ def ensure_run_completed():
+ selenium.driver.execute_async_script(
+ """
+ const done = arguments[arguments.length - 1];
+ pyodide.globals.shell.run_complete.then(done);
+ """
+ )
+
+ selenium.run(
+ """
+ from pyodide.console import InteractiveConsole
+
+ result = None
+
+ def displayhook(value):
+ global result
+ result = value
+
+ shell = InteractiveConsole()
+ sys.displayhook = displayhook"""
+ )
+
+ selenium.run("shell.push('x = 5')")
+ selenium.run("shell.push('x')")
+ ensure_run_completed()
+ assert selenium.run("result") == 5
+
+ selenium.run("shell.push('x ** 2')")
+ ensure_run_completed()
+ assert selenium.run("result") == 25
+
+ selenium.run("shell.push('def f(x):')")
+ selenium.run("shell.push(' return x*x + 1')")
+ selenium.run("shell.push('')")
+ selenium.run("shell.push('[f(x) for x in range(5)]')")
+ ensure_run_completed()
+ assert selenium.run("result") == [1, 2, 5, 10, 17]
+
+ selenium.run("shell.push('def factorial(n):')")
+ selenium.run("shell.push(' if n < 2:')")
+ selenium.run("shell.push(' return 1')")
+ selenium.run("shell.push(' else:')")
+ selenium.run("shell.push(' return n * factorial(n - 1)')")
+ selenium.run("shell.push('')")
+ selenium.run("shell.push('factorial(10)')")
+ ensure_run_completed()
+ assert selenium.run("result") == 3628800
+
+ # with package load
+ selenium.run("shell.push('import pytz')")
+ selenium.run("shell.push('pytz.utc.zone')")
+ ensure_run_completed()
+ assert selenium.run("result") == "UTC"
| Improving console.html
`console.html` is a great tool. It's a showcase for Pyodide, and [the devel version](https://pyodide-cdn2.iodide.io/dev/full/console.html) allows developers to track issues and make quick tests.
Unfortunately, IMHO, it lacks some important features like:
1. sync behavior (do not set next prompt until current command finished, e.g. with imports)
2. full stdout/stderr support (e.g. import warnings are not visible)
3. correct result representation with `repr` (type `"1"` and it shows `1`, type `globals()` and it shows `[object Object]`)
While 1. and 2. are addressed in this PR, 3. seems tricky, at least with the current setup (jQuery's terminal + the `code.py` interactive console + `runPythonAsync`) but not impossible (see [Basthon](https://basthon.fr)).
Getting correct representations for basic types (like strings) is addressed in this PR. The difficulty of getting a correct `repr` for complex objects comes from the fact that, when you get a Python object on the JS side (e.g. runPython[Async]'s result), it seems impossible to get it back into Python. As an illustration, compare the results of the two following code snippets:
```javascript
// returns "[object Object]"
pyodide.runPython("import js ; js.window.x = {1: 2} ; repr(js.window.x)")
// returns "{1: 2}"
pyodide.runPython("x = {1: 2} ; repr(x)")
```
Long story short,
```javascript
pyodide.runPython("repr({1: 2})")
// is not equivalent to:
pyodide.pyimport("repr")(pyodide.runPython("{1: 2}"))
```
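One possible partial direction (a sketch; per the discussion below, the PR ends up going a similar way with a custom `sys.displayhook`) is to keep the value on the Python side and only ship its `repr` across the boundary:
```python
import sys

def displayhook(value):
    if value is None:
        return
    print(repr(value))   # repr runs on the real Python object, not on a proxy

sys.displayhook = displayhook
```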
Any idea to fully address 3 with the current setup?
Should I edit the changelog?
Closes https://github.com/iodide-project/pyodide/issues/897
| Thanks a lot for improving `console.html` @casatir !
For point 3, could you open an issue about it? I think it's a general issue with py ↔ JS conversions, possibly related to https://github.com/iodide-project/pyodide/issues/780.
Yes, please add a changelog entry with a link to this PR.
Hello @casatir, thank you for your contribution.
I'm also of the same opinion: the REPL is a very important place to start with. I've just compiled and tested your branch, and it runs well, but I found another problem:
```python
s = input()
```
shows a JavaScript prompt() in 0.15.0 (which doesn't return anything anyway), but now results in an error
```
>>> s = input()
Error: Traceback (most recent call last):
File "/lib/python3.8/site-packages/pyodide.py", line 66, in eval_code
exec(compile(mod, "<exec>", mode="exec"), ns, ns)
File "<exec>", line 1, in <module>
ValueError: I/O operation on closed file.
```
Is there a way to resolve this? Please do also update the CHANGELOG.md.
I will update the changelog and put console stuff in `pyodide.py`.
Are you sure about the `input` issue? My tests show that it works like in 0.15.0...
> Are you sure about the `input` issue? My tests show that it works like in 0.15.0...
you're right... I've just built it again and it works. Forget about it, then.
Here is an attempt to fix #897. Seems OK, but the namespace is polluted by imports and variables from the REPL code. No tab completion on file paths (not supported by `Jedi`, from what I understand).
> Here is an attempt to fix #897. Seems OK, but the namespace is polluted by imports and variables from the REPL code. No tab completion on file paths (not supported by Jedi, from what I understand).
Great thanks! We can look into improving that situation in a separate PR.
> Answered in #896, where your proposal makes sense. However, should this be blocking for this PR?
I'll try to make a PR to create that package soon, but it's not a blocker. However we need tests for those classes to merge this PR, which for now probably means putting them into `src/pyodide.py`.
I think I have a working console with:
- sync behavior
- full stdout/stderr support
- correct result representation
- clean namespace
- integration in a new environment made easy
- tests
It is fully implemented in Python and demonstrates Pyodide's ability to interpret itself.
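Roughly, usage looks like this (a sketch based on the tests in this PR; exact signatures may differ):
```python
import sys
from pyodide.console import InteractiveConsole

def displayhook(value):
    if value is not None:
        print(value)

sys.displayhook = displayhook
shell = InteractiveConsole()
shell.push("x = 5")
shell.push("x * x")   # runcode is async; await shell.run_complete if you need the result
```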
That's a lot of changes, which I'll split into several PRs.
See you soon! | 2021-01-15T13:54:46 |
pyodide/pyodide | 1,146 | pyodide__pyodide-1146 | [
"960"
] | 293fefa7dffaa6f901e1562931085b7c844b1832 | diff --git a/src/pyodide-py/pyodide/__init__.py b/src/pyodide-py/pyodide/__init__.py
--- a/src/pyodide-py/pyodide/__init__.py
+++ b/src/pyodide-py/pyodide/__init__.py
@@ -1,5 +1,12 @@
from ._base import open_url, eval_code, find_imports, as_nested_list
from ._core import JsException # type: ignore
+from ._importhooks import JsFinder
+import sys
+
+jsfinder = JsFinder()
+register_js_module = jsfinder.register_js_module
+unregister_js_module = jsfinder.unregister_js_module
+sys.meta_path.append(jsfinder) # type: ignore
__version__ = "0.16.1"
@@ -10,4 +17,6 @@
"find_imports",
"as_nested_list",
"JsException",
+ "register_js_module",
+ "unregister_js_module",
]
diff --git a/src/pyodide-py/pyodide/_importhooks.py b/src/pyodide-py/pyodide/_importhooks.py
new file mode 100644
--- /dev/null
+++ b/src/pyodide-py/pyodide/_importhooks.py
@@ -0,0 +1,96 @@
+from ._core import JsProxy
+from importlib.abc import MetaPathFinder, Loader
+from importlib.util import spec_from_loader
+import sys
+
+
+class JsFinder(MetaPathFinder):
+ def __init__(self):
+ self.jsproxies = {}
+
+ def find_spec(self, fullname, path, target=None):
+ [parent, _, child] = fullname.rpartition(".")
+ if parent:
+ parent_module = sys.modules[parent]
+ if not isinstance(parent_module, JsProxy):
+ # Not one of us.
+ return None
+ try:
+ jsproxy = getattr(parent_module, child)
+ except AttributeError:
+ raise ModuleNotFoundError(
+ f"No module named {fullname!r}", name=fullname
+ ) from None
+ if not isinstance(jsproxy, JsProxy):
+ raise ModuleNotFoundError(
+ f"No module named {fullname!r}", name=fullname
+ )
+ else:
+ try:
+ jsproxy = self.jsproxies[fullname]
+ except KeyError:
+ return None
+ loader = JsLoader(jsproxy)
+ return spec_from_loader(fullname, loader, origin="javascript")
+
+ def register_js_module(self, name, jsproxy):
+ """
+        Registers the Js object ``jsproxy`` as a Js module named ``name``. The module
+ can then be imported from Python using the standard Python import system.
+ If another module by the same name has already been imported, this won't
+ have much effect unless you also delete the imported module from
+ ``sys.modules``. This is called by the javascript API
+ ``pyodide.registerJsModule``.
+
+ Parameters
+ ----------
+ name : str
+ Name of js module
+ jsproxy : JsProxy
+ Javascript object backing the module
+ """
+ if not isinstance(name, str):
+ raise TypeError(
+ f"Argument 'name' must be a str, not {type(name).__name__!r}"
+ )
+ if not isinstance(jsproxy, JsProxy):
+ raise TypeError(
+ f"Argument 'jsproxy' must be a JsProxy, not {type(jsproxy).__name__!r}"
+ )
+ self.jsproxies[name] = jsproxy
+
+ def unregister_js_module(self, name):
+ """
+        Unregisters a Js module with the given name that has been previously registered
+        with `js_api_pyodide_registerJsModule` or ``pyodide.register_js_module``. If
+        a Js module with that name does not exist, this will raise an error. Note
+ that if the module has already been imported, this won't have much effect
+ unless you also delete the imported module from ``sys.modules``. This is
+ called by the javascript API ``pyodide.unregisterJsModule``.
+
+ Parameters
+ ----------
+ name : str
+ Name of js module
+ """
+ try:
+ del self.jsproxies[name]
+ except KeyError:
+ raise ValueError(
+ f"Cannot unregister {name!r}: no javascript module with that name is registered"
+ ) from None
+
+
+class JsLoader(Loader):
+ def __init__(self, jsproxy):
+ self.jsproxy = jsproxy
+
+ def create_module(self, spec):
+ return self.jsproxy
+
+ def exec_module(self, module):
+ pass
+
+ # used by importlib.util.spec_from_loader
+ def is_package(self, fullname):
+ return True
| diff --git a/src/tests/test_jsproxy.py b/src/tests/test_jsproxy.py
--- a/src/tests/test_jsproxy.py
+++ b/src/tests/test_jsproxy.py
@@ -471,3 +471,158 @@ def test_window_isnt_super_weird_anymore():
assert js.window.Array == Array
assert js.window.window.window.window == window
assert window.window.window.window.Array == Array
+
+
+def test_mount_object(selenium):
+ result = selenium.run_js(
+ """
+ function x1(){
+ return "x1";
+ }
+ function x2(){
+ return "x2";
+ }
+ function y(){
+ return "y";
+ }
+ let a = { x : x1, y, s : 3, t : 7};
+ let b = { x : x2, y, u : 3, t : 7};
+ pyodide.registerJsModule("a", a);
+ pyodide.registerJsModule("b", b);
+ return pyodide.runPython(`
+ from a import x
+ from b import x as x2
+ result = [x(), x2()]
+ import a
+ import b
+ result += [a.s, dir(a), dir(b)]
+ result
+ `)
+ """
+ )
+ assert result[:3] == ["x1", "x2", 3]
+ assert set([x for x in result[3] if len(x) == 1]) == set(["x", "y", "s", "t"])
+ assert set([x for x in result[4] if len(x) == 1]) == set(["x", "y", "u", "t"])
+
+
[email protected]
+def test_mount_map(selenium):
+ result = selenium.run_js(
+ """
+ function x1(){
+ return "x1";
+ }
+ function x2(){
+ return "x2";
+ }
+ function y(){
+ return "y";
+ }
+ let a = new Map(Object.entries({ x : x1, y, s : 3, t : 7}));
+ let b = new Map(Object.entries({ x : x2, y, u : 3, t : 7}));
+ pyodide.registerJsModule("a", a);
+ pyodide.registerJsModule("b", b);
+ return pyodide.runPython(`
+ from a import x
+ from b import x as x2
+ result = [x(), x2()]
+ import a
+ import b
+ result += [a.s, dir(a), dir(b)]
+ import sys
+ del sys.modules["a"]
+ del sys.modules["b"]
+ result
+ `)
+ """
+ )
+ assert result[:3] == ["x1", "x2", 3]
+ # fmt: off
+ assert set(result[3]).issuperset(
+ [
+ "x", "y", "s", "t",
+ "__dir__", "__doc__", "__getattr__", "__loader__",
+ "__name__", "__package__", "__spec__",
+ "jsproxy",
+ ]
+ )
+ # fmt: on
+ assert set(result[4]).issuperset(["x", "y", "u", "t", "jsproxy"])
+
+
+def test_unregister_jsmodule(selenium):
+ selenium.run_js(
+ """
+ let a = new Map(Object.entries({ s : 7 }));
+ let b = new Map(Object.entries({ t : 3 }));
+ pyodide.registerJsModule("a", a);
+ pyodide.registerJsModule("a", b);
+ pyodide.unregisterJsModule("a")
+ pyodide.runPython(`
+ try:
+ import a
+ assert False
+ except ImportError:
+ pass
+ `)
+ """
+ )
+
+
+def test_unregister_jsmodule_error(selenium):
+ selenium.run_js(
+ """
+ try {
+ pyodide.unregisterJsModule("doesnotexist");
+ throw new Error("unregisterJsModule should have thrown an error.");
+ } catch(e){
+ if(!e.message.includes("Cannot unregister 'doesnotexist': no javascript module with that name is registered")){
+ throw e;
+ }
+ }
+ """
+ )
+
+
+def test_nested_import(selenium):
+ assert (
+ selenium.run_js(
+ """
+ window.a = { b : { c : { d : 2 } } };
+ return pyodide.runPython("from js.a.b import c; c.d");
+ """
+ )
+ == 2
+ )
+
+
+def test_register_jsmodule_docs_example(selenium):
+ selenium.run_js(
+ """
+ let my_module = {
+ f : function(x){
+ return x*x + 1;
+ },
+ g : function(x){
+ console.log(`Calling g on argument ${x}`);
+ return x;
+ },
+ submodule : {
+ h : function(x) {
+ return x*x - 1;
+ },
+ c : 2,
+ },
+ };
+ pyodide.registerJsModule("my_js_module", my_module);
+ """
+ )
+ selenium.run(
+ """
+ import my_js_module
+ from my_js_module.submodule import h, c
+ assert my_js_module.f(7) == 50
+ assert h(9) == 80
+ assert c == 2
+ """
+ )
| Configurable "import js"
Add a `pyodide.mountPackage("name", object)` api that mounts `object` at `name`, so that `import name` imports `object`, and `from name import field` imports `object.field` as `field`.
For example, the current `js` import would then be installed by saying `pyodide.mountPackage("js", globalThis)`, though we would keep the current behavior as the default. We could get rid of `import js` by saying `pyodide.mountPackage("js", undefined)`.
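On the Python side the effect looks like this (a sketch; the module name and attributes are taken from the docs-example test added in this PR):
```python
import my_js_module                    # the mounted JS object
from my_js_module import f             # attribute lookup on that object
from my_js_module.submodule import h   # nested JS objects behave like subpackages

assert f(7) == 50
```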
| sounds great.
BTW, is there any way to ban other imports? For example, `micropip`?
Say, a boolean config controlling whether `micropip` can be used or not?
You can do that yourself by [using Emscripten FS](https://emscripten.org/docs/api_reference/Filesystem-API.html?highlight=fs#FS.unlink) to delete the `micropip` package from the dummy file system. The filesystem is found either under `pyodide.FS` at start of load or under `pyodide._module.FS` once load is completed.
If you want, you can open another issue requesting that feature.
@hoodmane Oh, thank you~
Can the `pyodide` package be deleted the same way?
`micropip` is a pure python package, so there isn't much point in "banning" it; users can simply re-implement it. Further, micropip is only useful for installing pure python packages, and again banning those won't do you much good either. Just by having python around you essentially have full access to the virtual filesystem to insert whatever you want, including compiled C code, by importing `os`.
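For instance (a sketch; the exact site-packages path is an assumption):
```python
# plain Python already has full control over the virtual filesystem
import shutil
shutil.rmtree("/lib/python3.8/site-packages/micropip", ignore_errors=True)
```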
Thus, I believe any sort of sandboxing along these lines ought to come from restricting along the JS-wasm boundary, and nothing within python itself.
`micropip is a pure python package, so there isn't much point in "banning" it;`
So, this means I need to ban the HTTP API, not the package?
So, where is the network API?
> Thus, I believe any sort of sandboxing along these lines ought to come from restricting along the JS-wasm boundary, and nothing within python itself.
yes, you are right!!!
> So, this means I need to ban the HTTP API, not the package?
>
> So, where is the network API?
I believe we use XMLHTTPRequest imported from JavaScript
> I believe we use XMLHTTPRequest imported from JavaScript
So, how can it be disabled? For example, a config option to disable the import?
At the moment, we use XMLHTTPRequest via the js module, which currently allows arbitrary js execution. I think a reasonable approach would be to make our network requests use the emscripten fetch api, so that it can be used without the js module, then make the js module a configurable compile time option.
If we change to use the emscripten fetch API, please make it a feature that can be disabled.
@dalcde I don't think it's necessary even to have the js module be a compile time option. Inside `pyodide.js` we can do `mountPackage("js", globalThis)` and then if you don't want that then you can edit `pyodide.js` to remove that. So you wouldn't have to build a separate wasm binary.
I don't think this would work from a security perspective. The functions needed to enable the JS module are already imported into wasm. We need to disable it during compile time so that emscripten does not do the import. Again, users can re-insert the files into the emscripten virtual fs.
@dalcde I thought the point was that the wasm can call any javascript function it has a reference to, but if you don't ever give it a reference then it can't call it?
That is true, but the "reference" is the literal string name of the imported function. When instantiating a wasm module, you supply an object consisting of imports the wasm module is allowed to access, and the wasm module uses the imports by referring to their names (see https://developer.mozilla.org/en-US/docs/WebAssembly/Understanding_the_text_format#Importing_functions_from_JavaScript )
Thus, any import we pass on to a wasm module will be usable, and the list of imports is determined by emscripten at compile time.
Have you checked [RestrictedPython](https://restrictedpython.readthedocs.io/en/latest/)? It's a pure python package and allows applying custom restrictions to the import function.
@oeway
thanks for the heads-up ~
---
but as you can read there: https://restrictedpython.readthedocs.io/en/latest/idea.html
> There should be additional preventive measures taken to ensure integrity of the application and the server itself, according to information security best practice and unrelated to RestrictedPython.
so making a sandboxed runtime / limiting the available import packages is still necessary.
---
but `RestrictedPython` still seems like a good tool to help build a safe OJ (online judge) runtime.
> but you can read in there
> https://restrictedpython.readthedocs.io/en/latest/idea.html
>
> There should be additional preventive measures taken to ensure integrity
> of the application and the server itself, according to information security
> best practice and unrelated to RestrictedPython.
>
> so making a sandboxed runtime / limiting the available import packages is still
> necessary.
Well, ensuring integrity typically means you need to make
sure the python runtime, modules, or scripts are downloaded from
a reliable source, with hash values verified, etc. I don't think they mean you
need an additional sandbox mechanism.
However, they also mentioned:
> RestrictedPython is not a sandbox system or a secured environment, but it helps to define a trusted environment and execute untrusted code inside of it.
Since we have the browser itself as a sandbox system, maybe having this additional restriction layer will be enough for many such applications?
If you read here:
https://restrictedpython.readthedocs.io/en/latest/usage/policy.html#implementing-a-policy
they let you implement your own import policy to limit packages.
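For example, a minimal sketch of such a policy (the allow-list and the snippet are hypothetical):
```python
from RestrictedPython import compile_restricted, safe_builtins
from RestrictedPython.Guards import safer_getattr

ALLOWED = {"math", "itertools"}  # hypothetical allow-list

def guarded_import(name, *args, **kwargs):
    if name.split(".")[0] not in ALLOWED:
        raise ImportError(f"import of {name!r} is blocked")
    return __import__(name, *args, **kwargs)

byte_code = compile_restricted("import math\nroot = math.sqrt(2)", "<untrusted>", "exec")
glb = {
    "__builtins__": dict(safe_builtins, __import__=guarded_import),
    "_getattr_": safer_getattr,  # attribute-access guard required by RestrictedPython
}
exec(byte_code, glb)
```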
If I understand correctly, the goal of this issue is to provide some restrictions on accessing js. If we lift the restriction already in pyodide (either at the JS-wasm boundary or through `mountPackage("js", globalThis)`), many packages patched via js won't work (e.g. a web loop for asyncio, or an http client module that calls js.fetch).
In my opinion, using `RestrictedPython` to create a restricted environment inside Python with a set of custom policies is much more flexible.
> Well, ensuring integrity typically means you need to make sure the python runtime, modules, or scripts are downloaded from a reliable source, with hash values verified.
but the code may come from an untrusted user.
please see [this ](https://github.com/iodide-project/pyodide/issues/959#issuecomment-751421344) , and [this](https://github.com/iodide-project/pyodide/issues/959#issuecomment-751424230) and [this](https://github.com/iodide-project/pyodide/issues/959#issuecomment-751429757) .
---
Python is a dynamic language and can be patched everywhere; no one knows where the next sandbox escape vulnerability will be.
So I think a configurable `import`, or a configurable runtime, is needed to build a good sandbox; making things configurable at the JS-wasm boundary is the best choice.
And I think that if we have `mountPackage("js", globalThis)` and `unmountPackage("js")` between two `runPython` calls, then "many packages patched via js won't work (e.g. a web loop for asyncio, or an http client module that calls js.fetch)" may not be an issue. Also, `RestrictedPython` is still a necessary tool as a second guard wall.
> but the code may come from an untrusted user.
Yes, but you can run untrusted code within RestrictedPython; as advertised, **it helps to define a trusted environment and execute untrusted code inside of it**. In other words, you don't need to do anything within the python runtime, and untrusted code can run directly within RestrictedPython. It is designed to run untrusted code on the server side (e.g. with zope, which is a web app server) in a native Python environment (possibly without additional protection).
>
> please see [this ](https://github.com/iodide-project/pyodide/issues/959#issuecomment-751421344) , and [this](https://github.com/iodide-project/pyodide/issues/959#issuecomment-751424230) and [this](https://github.com/iodide-project/pyodide/issues/959#issuecomment-751429757) .
I tried to read the thread, but I got confused as well. @dalcde thinks this might be useful when Pyodide is run on a server, which totally makes sense to me, and I think we can then use RestrictedPython. My feeling is that RestrictedPython is a more thorough solution.
However, it seems you are trying to run Pyodide on the client side, where I don't think any of these will work; as @hoodmane pointed out, an evil user can always open a browser console and run js code to remove any restrictions you applied.
If you assume the user cannot access the browser console, then you can simply assume all the code can be executed by RestrictedPython, where you can configure any import restrictions, including blacklisting the js module and potentially much more, no?
> And I think that if we have mountPackage("js", globalThis) and unmountPackage("js") between two runPython calls, then "many packages patched via js won't work (e.g. a web loop for asyncio, or an http client module that calls js.fetch)" may not be an issue. Also, RestrictedPython is still a necessary tool as a second guard wall.
Unfortunately, many js patches are applied at runtime; not all of them import everything when the package is loaded.
@oeway If you want to argue that it's impossible to do what @Lyoko-Jeremie is doing, can you please do it on the original thread #959?
(What @Lyoko-Jeremie is doing is code obfuscation, which is impossible in theory but possible to do well enough for many practical purposes. If people care to break it they will eventually do so though.)
@oeway The reason I suggested this feature is because I think it is easy to implement (looking at jsimport.c it's a very small change) and it adds useful flexibility. In particular, I'm less interested in unmounting `js` as @Lyoko-Jeremie wants to do, and more interested in being able to set up several different javascript implemented "packages".
> Thus, any import we pass on to a wasm module will be usable, and the list of imports is determined by emscripten at compile time.
This is impossible. It is possible to pass a wasm function a js callback defined at runtime. The reference to the js callback is some sort of pointer. The compiler can't know at compile time what arguments it will be called with at runtime.
> @oeway If you want to argue that it's impossible to do what @Lyoko-Jeremie is doing, can you please do it on the original thread #959?
OK, it's indeed related to that, but I meant to propose using RestrictedPython instead of switching to emscripten's fetch API and applying the restrictions at the js-wasm interface.
> The reason I suggested this feature is because I think it is easy to implement (looking at jsimport.c it's a very small change) and it adds useful flexibility. In particular, I'm less interested in unmounting js as @Lyoko-Jeremie wants to do, and more interested in being able to set up several different javascript implemented "packages".
I see; if that's easy enough, it totally makes sense to me. Being able to mount js packages is certainly interesting to me; I am actually thinking along those lines, for example importing tensorflow.js. For that I will need to also patch the python-to-js and js-to-python data type conversions, though.
> proposing using RestrictedPython
Still seems like it's more on topic in the original thread.
> I will need to also patch the python to js and js to python data type conversion though.
This sounds interesting. I opened another issue #964 to discuss this.
>This is impossible. It is possible to pass a wasm function a js callback defined at runtime. The reference to the js callback is some sort of pointer. The compiler can't know at compile time what arguments it will be called with at runtime.
Fair point. You can add functions to the table at runtime. However, the wasm imports used by the js module let you access javascript globals, hence window.eval. Thus, we already have full access to the javascript runtime before we add any new functions at runtime.
> for example importing tensorflow.js. For that I will need to also patch the python to js and js to python data type conversion though.
About this, I have a similar idea: the official Python version of OpenCV seems to depend on the OpenCV binary runtime, and porting it to pyodide may take a lot of effort. But we have [OpenCV.js](https://docs.opencv.org/3.4/d5/d10/tutorial_js_root.html), which is now under development; you can set it up [following this](https://docs.opencv.org/3.4/d0/d84/tutorial_js_usage.html), or [download opencv.js from here](https://docs.opencv.org/3.4.0/opencv.js).
So we could simply make the Python version of OpenCV run in the browser using OpenCV.js and a simple Python wrapper.
> However, the wasm imports used by the js module lets you access javascript globals, hence window.eval. Thus, we already have full access to the javascript runtime before we add any new functions at runtime.
So we need a way to limit `import js` or disable it, as the topic title says, right?
I'm working on a PR for this.
I think as a first step we could make disabling js disable network requests too. We can later re-enable it by hand, using the fetch API or otherwise
> make disabling js disable network requests too
Is disabling network requests a compile time thing? What I'm doing is entirely at runtime.
network requests (specifically micropip) are done via the js module, so if the js module is gone that would break too
@oeway
> Have you checked `RestrictedPython`? It's a pure python package and allows applying custom restrictions to the import function.
Now I've tried it, and it always throws `__import__ not found`, like the following. I guess pyodide has a custom `__import__` implementation that differs from normal CPython, so the RestrictedPython patch does not work as expected.
```
Traceback (most recent call last):
File "<exec>", line 77, in importRestrictedPython
File "code.RPY", line 42, in <module>
ImportError: __import__ not found
```
---
Edit:
I fixed it now; it works well!!! Thanks to everyone who helped me.
@hoodmane
> You can do that yourself by using Emscripten FS to delete the micropip package from the dummy file system. The filesystem is found either under pyodide.FS at start of load or under pyodide._module.FS once load is completed.
Hi, I found it at the path `pyodide._module.FS.root.contents.lib.contents['python3.8'].contents['site-packages'].contents`; can I directly delete it using `FS.rmdir('/lib/python3.8/site-packages/xxxxx')`?
---
Edit:
It seems I can use `pyodide._module.FS.destroyNode(pyodide._module.FS.root.contents.lib.contents['python3.8'].contents['site-packages'].contents.numpy)` to delete a directory
As a concrete example of my objection, I've added a JS2 package in my branch here: https://github.com/dalcde/pyodide/tree/js2 . Its source code is basically copied from `jsimport.c`, and it acts exactly the same as the existing `js`. If you build the branch, you can import `js2` and get the same functionality.
Now a python package is just a bunch of files in the virtual filesystem. Even if you hide the `js` package, a malicious user can still write the same bytes into the virtual fs using `open/write`, and now you have full javascript access. This will work as long as the relevant functions in hiwire and js2python exist and are compiled.
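Sketched out (the path and module contents are illustrative):
```python
# any running Python code can (re)create modules in the virtual filesystem,
# so hiding a package is not a security boundary
with open("/lib/python3.8/site-packages/sneaky.py", "w") as f:
    f.write("secret = 42\n")

import sneaky
assert sneaky.secret == 42
```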
That said, I'm still a fan of #972 | 2021-01-16T18:29:54 |
pyodide/pyodide | 1,186 | pyodide__pyodide-1186 | [
"1162"
] | 1fb2df4e13876327b4a29a49ed527863b931d0ff | diff --git a/packages/micropip/micropip/micropip.py b/packages/micropip/micropip/micropip.py
--- a/packages/micropip/micropip/micropip.py
+++ b/packages/micropip/micropip/micropip.py
@@ -12,7 +12,10 @@ class js_pyodide: # type: ignore
class _module:
class packages:
- dependencies = [] # type: ignore
+ class dependencies:
+ @staticmethod
+ def object_entries():
+ return []
import hashlib
@@ -140,7 +143,9 @@ class _PackageManager:
def __init__(self):
self.builtin_packages = {}
- self.builtin_packages.update(js_pyodide._module.packages.dependencies)
+ self.builtin_packages.update(
+ js_pyodide._module.packages.dependencies.object_entries()
+ )
self.installed_packages = {}
def install(
diff --git a/src/pyodide-py/pyodide/_core.py b/src/pyodide-py/pyodide/_core.py
--- a/src/pyodide-py/pyodide/_core.py
+++ b/src/pyodide-py/pyodide/_core.py
@@ -2,7 +2,7 @@
import platform
if platform.system() == "Emscripten":
- from _pyodide_core import JsProxy, JsMethod, JsException, JsBuffer
+ from _pyodide_core import JsProxy, JsException, JsBuffer
else:
# Can add shims here if we are so inclined.
class JsException(Exception):
@@ -17,15 +17,10 @@ class JsProxy:
# Defined in jsproxy.c
- class JsMethod:
- """A proxy to make it possible to call Javascript bound methods from Python."""
-
- # Defined in jsproxy.c
-
class JsBuffer:
"""A proxy to make it possible to call Javascript typed arrays from Python."""
# Defined in jsproxy.c
-__all__ = [JsProxy, JsMethod, JsException]
+__all__ = [JsProxy, JsException]
| diff --git a/src/tests/test_jsproxy.py b/src/tests/test_jsproxy.py
--- a/src/tests/test_jsproxy.py
+++ b/src/tests/test_jsproxy.py
@@ -22,7 +22,6 @@ def test_jsproxy_dir(selenium):
"__defineGetter__",
"__defineSetter__",
"__delattr__",
- "__delitem__",
"constructor",
"toString",
"typeof",
@@ -88,28 +87,25 @@ class Point {
assert (
selenium.run(
"""
- from js import TEST
- del TEST.y
- hasattr(TEST, 'y')"""
+ from js import TEST
+ del TEST.y
+ hasattr(TEST, 'y')
+ """
)
is False
)
selenium.run_js(
"""
- class Point {
- constructor(x, y) {
- this.x = x;
- this.y = y;
- }
- }
- window.TEST = new Point(42, 43);"""
+ window.TEST = new Map([["x", 42], ["y", 43]]);
+ """
)
assert (
selenium.run(
"""
- from js import TEST
- del TEST['y']
- 'y' in TEST"""
+ from js import TEST
+ del TEST['y']
+ 'y' in TEST
+ """
)
is False
)
@@ -134,7 +130,7 @@ class Point {
selenium.run(
"""
from js import TEST
- dict(TEST) == {'foo': 'bar', 'baz': 'bap'}
+ dict(TEST.object_entries()) == {'foo': 'bar', 'baz': 'bap'}
"""
)
is True
@@ -440,12 +436,11 @@ def test_unregister_jsmodule(selenium):
pyodide.registerJsModule("a", a);
pyodide.registerJsModule("a", b);
pyodide.unregisterJsModule("a")
- pyodide.runPython(`
- try:
+ await pyodide.runPythonAsync(`
+ from unittest import TestCase
+ raises = TestCase().assertRaises
+ with raises(ImportError):
import a
- assert False
- except ImportError:
- pass
`)
"""
)
@@ -508,3 +503,214 @@ def test_register_jsmodule_docs_example(selenium):
assert c == 2
"""
)
+
+
+def test_mixins_feature_presence(selenium):
+ result = selenium.run_js(
+ """
+ let fields = [
+ [{ [Symbol.iterator](){} }, "__iter__"],
+ [{ next(){} }, "__next__", "__iter__"],
+ [{ length : 1 }, "__len__"],
+ [{ get(){} }, "__getitem__"],
+ [{ set(){} }, "__setitem__", "__delitem__"],
+ [{ has(){} }, "__contains__"],
+ [{ then(){} }, "__await__"]
+ ];
+
+ let test_object = pyodide.runPython(`
+ from js import console
+ def test_object(obj, keys_expected):
+ for [key, expected_val] in keys_expected.object_entries():
+ actual_val = hasattr(obj, key)
+ if actual_val != expected_val:
+ console.log(obj)
+ console.log(key)
+ console.log(actual_val)
+ assert False
+ test_object
+ `);
+
+ for(let flags = 0; flags < (1 << fields.length); flags ++){
+ let o = {};
+ let keys_expected = {};
+ for(let [idx, [obj, ...keys]] of fields.entries()){
+ if(flags & (1<<idx)){
+ Object.assign(o, obj);
+ }
+ for(let key of keys){
+ keys_expected[key] = keys_expected[key] || !!(flags & (1<<idx));
+ }
+ }
+ test_object(o, keys_expected);
+ }
+ """
+ )
+
+
+def test_mixins_calls(selenium):
+ result = selenium.run_js(
+ """
+ window.testObjects = {};
+ testObjects.iterable = { *[Symbol.iterator](){
+ yield 3; yield 5; yield 7;
+ } };
+ testObjects.iterator = testObjects.iterable[Symbol.iterator]();
+ testObjects.has_len1 = { length : 7, size : 10 };
+ testObjects.has_len2 = { length : 7 };
+ testObjects.has_get = { get(x){ return x; } };
+ testObjects.has_getset = new Map();
+ testObjects.has_has = { has(x){ return typeof(x) === "string" && x.startsWith("x") } };
+ testObjects.has_includes = { includes(x){ return typeof(x) === "string" && x.startsWith("a") } };
+ testObjects.has_has_includes = {
+ includes(x){ return typeof(x) === "string" && x.startsWith("a") },
+ has(x){ return typeof(x) === "string" && x.startsWith("x") }
+ };
+ testObjects.awaitable = { then(cb){ cb(7); } };
+
+ let result = await pyodide.runPythonAsync(`
+ from js import testObjects as obj
+ result = []
+ result.append(["iterable1", list(iter(obj.iterable)), [3, 5, 7]])
+ result.append(["iterable2", [*obj.iterable], [3, 5, 7]])
+ it = obj.iterator
+ result.append(["iterator", [next(it), next(it), next(it)], [3, 5, 7]])
+ result.append(["has_len1", len(obj.has_len1), 10])
+ result.append(["has_len2", len(obj.has_len2), 7])
+ result.append(["has_get1", obj.has_get[10], 10])
+ result.append(["has_get2", obj.has_get[11], 11])
+ m = obj.has_getset
+ m[1] = 6
+ m[2] = 77
+ m[3] = 9
+ m[2] = 5
+ del m[3]
+ result.append(["has_getset", [x.to_py() for x in m.entries()], [[1, 6], [2, 5]]])
+ result.append(["has_has", [n in obj.has_has for n in ["x9", "a9"]], [True, False]])
+ result.append(["has_includes", [n in obj.has_includes for n in ["x9", "a9"]], [False, True]])
+ result.append(["has_has_includes", [n in obj.has_has_includes for n in ["x9", "a9"]], [True, False]])
+ result.append(["awaitable", await obj.awaitable, 7])
+ result
+ `);
+ return result.toJs();
+ """
+ )
+ for [desc, a, b] in result:
+ assert a == b, desc
+
+
+def test_mixins_errors(selenium):
+ selenium.run_js(
+ """
+ window.a = [];
+ window.b = {
+ has(){ return false; },
+ get(){ return undefined; },
+ set(){ return false; },
+ delete(){ return false; },
+ };
+ await pyodide.runPythonAsync(`
+ from unittest import TestCase
+ raises = TestCase().assertRaises
+ from js import a, b
+ with raises(IndexError):
+ a[0]
+ with raises(IndexError):
+ del a[0]
+ with raises(KeyError):
+ b[0]
+ with raises(KeyError):
+ del b[0]
+ `);
+
+ window.c = {
+ next(){},
+ length : 1,
+ get(){},
+ set(){},
+ has(){},
+ then(){}
+ };
+ window.d = {
+ [Symbol.iterator](){},
+ };
+ pyodide.runPython("from js import c, d");
+ delete c.next;
+ delete c.length;
+ delete c.get;
+ delete c.set;
+ delete c.has;
+ delete c.then;
+ delete d[Symbol.iterator];
+ await pyodide.runPythonAsync(`
+ from contextlib import contextmanager
+ from unittest import TestCase
+ @contextmanager
+ def raises(exc, match=None):
+ with TestCase().assertRaisesRegex(exc, match) as e:
+ yield e
+
+ from pyodide import JsException
+ msg = "^TypeError:.* is not a function$"
+ with raises(JsException, match=msg):
+ next(c)
+ with raises(JsException, match=msg):
+ iter(d)
+ with raises(TypeError, match="object does not have a valid length"):
+ len(c)
+ with raises(JsException, match=msg):
+ c[0]
+ with raises(JsException, match=msg):
+ c[0] = 7
+ with raises(JsException, match=msg):
+ del c[0]
+ with raises(TypeError, match="can't be used in 'await' expression"):
+ await c
+ `);
+
+ window.l = [0, false, NaN, undefined, null];
+ window.l[6] = 7;
+ await pyodide.runPythonAsync(`
+ from unittest import TestCase
+ raises = TestCase().assertRaises
+ from js import l
+ with raises(IndexError):
+ l[10]
+ with raises(IndexError):
+ l[5]
+ assert len(l) == 7
+ l[0]; l[1]; l[2]; l[3]
+ l[4]; l[6]
+ del l[1]
+ with raises(IndexError):
+ l[4]
+ l[5]
+ del l[4]
+ l[3]; l[4]
+ `);
+
+ window.l = [0, false, NaN, undefined, null];
+ window.l[6] = 7;
+ let a = Array.from(window.l.entries());
+ console.log(a);
+ a.splice(5, 1);
+ window.m = new Map(a);
+ console.log(m.size);
+ await pyodide.runPythonAsync(`
+ from js import m
+ from unittest import TestCase
+ raises = TestCase().assertRaises
+ with raises(KeyError):
+ m[10]
+ with raises(KeyError):
+ m[5]
+ assert len(m) == 6
+ m[0]; m[1]; m[2]; m[3]
+ m[4]; m[6]
+ del m[1]
+ with raises(KeyError):
+ m[1]
+ assert len(m) == 5
+ `);
+ """
+ )
diff --git a/src/tests/test_pyodide.py b/src/tests/test_pyodide.py
--- a/src/tests/test_pyodide.py
+++ b/src/tests/test_pyodide.py
@@ -182,10 +182,12 @@ def test_hiwire_is_promise(selenium):
"new Map()",
"new Set()",
]:
- assert not selenium.run_js(f"return pyodide._module.hiwire.isPromise({s})")
+ assert selenium.run_js(
+ f"return pyodide._module.hiwire.isPromise({s}) === false;"
+ )
assert selenium.run_js(
- "return pyodide._module.hiwire.isPromise(Promise.resolve());"
+ "return pyodide._module.hiwire.isPromise(Promise.resolve()) === true;"
)
assert selenium.run_js(
diff --git a/src/tests/test_python.py b/src/tests/test_python.py
--- a/src/tests/test_python.py
+++ b/src/tests/test_python.py
@@ -158,12 +158,12 @@ def test_eval_nothing(selenium):
def test_unknown_attribute(selenium):
- selenium.run(
+ selenium.run_async(
"""
+ from unittest import TestCase
+ raises = TestCase().assertRaisesRegex
import js
- try:
+ with raises(AttributeError, "asdf"):
js.asdf
- except AttributeError as e:
- assert "asdf" in str(e)
"""
)
diff --git a/src/tests/test_webloop.py b/src/tests/test_webloop.py
--- a/src/tests/test_webloop.py
+++ b/src/tests/test_webloop.py
@@ -55,6 +55,8 @@ def test_capture_exception(selenium):
run_with_resolve(
selenium,
"""
+ from unittest import TestCase
+ raises = TestCase().assertRaises
from js import resolve
class MyException(Exception):
pass
@@ -62,12 +64,9 @@ async def foo(arg):
raise MyException('oops')
def capture_exception(fut):
- try:
+ with raises(MyException):
fut.result()
- except MyException:
- resolve()
- else:
- raise Exception("Expected fut.result() to raise MyException")
+ resolve()
import asyncio
fut = asyncio.ensure_future(foo(998))
fut.add_done_callback(capture_exception)
@@ -131,16 +130,15 @@ def test_asyncio_exception(selenium):
run_with_resolve(
selenium,
"""
+ from unittest import TestCase
+ raises = TestCase().assertRaises
from js import resolve
async def dummy_task():
raise ValueError("oops!")
async def capture_exception():
- try:
+ with raises(ValueError):
await dummy_task()
- except ValueError:
- resolve()
- else:
- raise Exception("Expected ValueError")
+ resolve()
import asyncio
asyncio.ensure_future(capture_exception())
""",
| Use proper duck typing for JsProxy
I think the idea of duck typing is to be able to implement functions like:
```python
def isaduck(obj):
    return hasattr(obj, "quack") and callable(obj.quack)
```
If we implement quack on every `JsProxy` and just throw an error if we can't figure out how to quack:
```python
class JsProxy:
def quack(self):
if not hasattr(self.jsobject, "quack"):
raise TypeError("Underlying Js object is not a duck")
return self.jsobject.quack()
```
Now `isaduck(jsproxy)` _always_ returns `True`, but the `jsproxy` may be planning to raise a type error every time you call `jsproxy.quack()`. If we know that `quack()` has no side effects, then you can try:
```python
def isaduck(obj):
try:
obj.quack()
return True
    except (TypeError, NotImplementedError):
return False
```
but this is a dangerous assumption. Thus I view writing a wrapper class like this as a problematic antipattern. The point of this PR is to gradually move away from this antipattern towards conditionally implementing exactly the set of capabilities that the underlying javascript object can support.
Instead, I am trying to switch to:
```python
class JsProxyDuck(JsProxy):
def quack(self):
return self.jsobject.quack()
```
and then in the factory function `jsproxy_create` we decide whether to inherit from `JsProxyDuck`.
The already merged PRs #1124 and #1153 are both steps in this direction.
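A rough sketch of what that factory could look like (hypothetical Python stand-ins for the actual C implementation):
```python
def jsproxy_create(jsobject):
    mixins = []
    if js_has_callable(jsobject, "quack"):  # hypothetical capability probe
        mixins.append(JsProxyDuck)
    cls = type("JsProxy", tuple(mixins) + (JsProxy,), {})
    return cls(jsobject)
```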
| 2021-02-02T04:28:25 |
|
pyodide/pyodide | 1,215 | pyodide__pyodide-1215 | [
"1202"
] | b22b4f0c9e4d811a8c782fdb2694532d84f012a0 | diff --git a/docs/sphinx_pyodide/sphinx_pyodide/jsdoc.py b/docs/sphinx_pyodide/sphinx_pyodide/jsdoc.py
--- a/docs/sphinx_pyodide/sphinx_pyodide/jsdoc.py
+++ b/docs/sphinx_pyodide/sphinx_pyodide/jsdoc.py
@@ -4,6 +4,7 @@
from docutils.utils import new_document
from collections import OrderedDict
+import re
from sphinx import addnodes
from sphinx.util import rst
@@ -11,9 +12,13 @@
from sphinx.ext.autosummary import autosummary_table, extract_summary
from sphinx_js.jsdoc import Analyzer as JsAnalyzer
-from sphinx_js.ir import Function
+from sphinx_js.ir import Class, Function
from sphinx_js.parsers import path_and_formal_params, PathVisitor
-from sphinx_js.renderers import AutoFunctionRenderer, AutoAttributeRenderer
+from sphinx_js.renderers import (
+ AutoFunctionRenderer,
+ AutoAttributeRenderer,
+ AutoClassRenderer,
+)
class PyodideAnalyzer:
@@ -47,7 +52,12 @@ def get_object_from_json(self, json):
path components which JsAnalyzer.get_object requires.
"""
path = self.longname_to_path(json["longname"])
- kind = "function" if json["kind"] == "function" else "attribute"
+ if json["kind"] == "function":
+ kind = "function"
+ elif json["kind"] == "class":
+ kind = "class"
+ else:
+ kind = "attribute"
obj = self.inner.get_object(path, kind)
obj.kind = kind
return obj
@@ -58,12 +68,16 @@ def create_js_doclets(self):
"""
def get_val():
- return OrderedDict([["attribute", []], ["function", []]])
+ return OrderedDict([["attribute", []], ["function", []], ["class", []]])
self.js_docs = {key: get_val() for key in ["globals", "pyodide", "PyProxy"]}
items = {"PyProxy": []}
for (key, group) in self._doclets_by_class.items():
key = [x for x in key if "/" not in x]
+ if key[-1] == "PyBuffer":
+ # PyBuffer stuff is documented as a class. Would be nice to have
+ # a less ad hoc way to deal with this...
+ continue
if key[-1] == "globalThis":
items["globals"] = group
if key[0] == "pyodide." and key[-1] == "Module":
@@ -76,7 +90,13 @@ def get_val():
if json.get("access", None) == "private":
continue
obj = self.get_object_from_json(json)
+ if isinstance(obj, Class):
+ # sphinx-jsdoc messes up array types. Fix them.
+ for x in obj.members:
+ if hasattr(x, "type"):
+ x.type = re.sub("Array\.<([a-zA-Z_0-9]*)>", r"\1[]", x.type)
if obj.name[0] == '"' and obj.name[-1] == '"':
+ # sphinx-jsdoc messes up Symbol attributes. Fix them.
obj.name = "[" + obj.name[1:-1] + "]"
self.js_docs[key][obj.kind].append(obj)
@@ -97,11 +117,13 @@ def get_rst(self, obj):
JsDoc also has an AutoClassRenderer which may be useful in the future."""
if isinstance(obj, Function):
renderer = AutoFunctionRenderer
+ elif isinstance(obj, Class):
+ renderer = AutoClassRenderer
else:
renderer = AutoAttributeRenderer
- return renderer(self, app, arguments=["dummy"]).rst(
- [obj.name], obj, use_short_name=False
- )
+ return renderer(
+ self, app, arguments=["dummy"], options={"members": ["*"]}
+ ).rst([obj.name], obj, use_short_name=False)
def get_rst_for_group(self, objects):
return [self.get_rst(obj) for obj in objects]
@@ -144,6 +166,9 @@ def run(self):
for group_name, group_objects in value.items():
if not group_objects:
continue
+ if group_name == "class":
+ # Plural of class is "classes" not "classs"
+ group_name += "e"
result.append(self.format_heading(group_name.title() + "s:"))
table_items = self.get_summary_table(module, group_objects)
table_markup = self.format_table(table_items)
| diff --git a/packages/numpy/test_numpy.py b/packages/numpy/test_numpy.py
--- a/packages/numpy/test_numpy.py
+++ b/packages/numpy/test_numpy.py
@@ -1,3 +1,6 @@
+import pytest
+
+
def test_numpy(selenium):
selenium.load_package("numpy")
selenium.run("import numpy")
@@ -191,3 +194,107 @@ def test_runwebworker_numpy(selenium_standalone):
"""
)
assert output == "[0. 0. 0. 0. 0.]"
+
+
+def test_get_buffer(selenium):
+ selenium.run_js(
+ """
+ await pyodide.runPythonAsync(`
+ import numpy as np
+ x = np.arange(24)
+ z1 = x.reshape([8,3])
+ z2 = z1[-1::-1]
+ z3 = z1[::,-1::-1]
+ z4 = z1[-1::-1,-1::-1]
+ `);
+ for(let x of ["z1", "z2", "z3", "z4"]){
+ let z = pyodide.pyimport(x).getBuffer("u32");
+ for(let idx1 = 0; idx1 < 8; idx1++) {
+ for(let idx2 = 0; idx2 < 3; idx2++){
+ let v1 = z.data[z.offset + z.strides[0] * idx1 + z.strides[1] * idx2];
+ let v2 = pyodide.runPython(`repr(${x}[${idx1}, ${idx2}])`);
+ console.log(`${v1}, ${typeof(v1)}, ${v2}, ${typeof(v2)}, ${v1===v2}`);
+ if(v1.toString() !== v2){
+ throw new Error(`Discrepancy ${x}[${idx1}, ${idx2}]: ${v1} != ${v2}`);
+ }
+ }
+ }
+ z.release();
+ }
+ """
+ )
+
+
[email protected](
+ "arg",
+ [
+ "np.arange(6).reshape((2, -1))",
+ "np.arange(12).reshape((3, -1))[::2, ::2]",
+ "np.arange(12).reshape((3, -1))[::-1, ::-1]",
+ "np.arange(12).reshape((3, -1))[::, ::-1]",
+ "np.arange(12).reshape((3, -1))[::-1, ::]",
+ "np.arange(12).reshape((3, -1))[::-2, ::-2]",
+ "np.arange(6).reshape((2, -1)).astype(np.int8, order='C')",
+ "np.arange(6).reshape((2, -1)).astype(np.int8, order='F')",
+ "np.arange(6).reshape((2, -1, 1))",
+ "np.ones((1, 1))[0:0]", # shape[0] == 0
+ "np.ones(1)", # ndim == 0
+ ]
+ + [
+ f"np.arange(3).astype(np.{type_})"
+ for type_ in ["int8", "uint8", "int16", "int32", "float32", "float64"]
+ ],
+)
+def test_get_buffer_roundtrip(selenium, arg):
+ selenium.run_js(
+ f"""
+ await pyodide.runPythonAsync(`
+ import numpy as np
+ x = {arg}
+ `);
+ window.x_js_buf = pyodide.pyimport("x").getBuffer();
+ x_js_buf.length = x_js_buf.data.length;
+ """
+ )
+
+ selenium.run_js(
+ """
+ pyodide.runPython(`
+ import itertools
+ from unittest import TestCase
+ from js import x_js_buf
+ assert_equal = TestCase().assertEqual
+
+ assert_equal(x_js_buf.ndim, x.ndim)
+ assert_equal(x_js_buf.shape.to_py(), list(x.shape))
+ assert_equal(x_js_buf.strides.to_py(), [s/x.itemsize for s in x.data.strides])
+ assert_equal(x_js_buf.format, x.data.format)
+ if len(x) == 0:
+ assert x_js_buf.length == 0
+ else:
+ minoffset = 1000
+ maxoffset = 0
+ for tup in itertools.product(*[range(n) for n in x.shape]):
+ offset = x_js_buf.offset + sum(x*y for (x,y) in zip(tup, x_js_buf.strides))
+ minoffset = min(offset, minoffset)
+ maxoffset = max(offset, maxoffset)
+ assert_equal(x[tup], x_js_buf.data[offset])
+ assert_equal(minoffset, 0)
+ assert_equal(maxoffset + 1, x_js_buf.length)
+ x_js_buf.release()
+ `);
+ """
+ )
+
+
+def test_get_buffer_error_messages(selenium):
+ with pytest.raises(Exception, match="Javascript has no Float16Array"):
+ selenium.run_js(
+ """
+ await pyodide.runPythonAsync(`
+ import numpy as np
+ x = np.ones(2, dtype=np.float16)
+ `);
+ pyodide.pyimport("x").getBuffer();
+ """
+ )
diff --git a/src/tests/test_pyproxy.py b/src/tests/test_pyproxy.py
--- a/src/tests/test_pyproxy.py
+++ b/src/tests/test_pyproxy.py
@@ -198,6 +198,40 @@ def test():
assert result == result2
+def test_pyproxy_get_buffer(selenium):
+ selenium.run_js(
+ """
+ await pyodide.runPython(`
+ from sys import getrefcount
+ z1 = memoryview(bytes(range(24))).cast("b", [8,3])
+ z2 = z1[-1::-1]
+ `);
+ for(let x of ["z1", "z2"]){
+ pyodide.runPython(`assert getrefcount(${x}) == 2`);
+ let proxy = pyodide.globals.get(x);
+ pyodide.runPython(`assert getrefcount(${x}) == 3`);
+ let z = proxy.getBuffer();
+ pyodide.runPython(`assert getrefcount(${x}) == 4`);
+ proxy.destroy();
+ pyodide.runPython(`assert getrefcount(${x}) == 3`);
+ for(let idx1 = 0; idx1 < 8; idx1++) {
+ for(let idx2 = 0; idx2 < 3; idx2++){
+ let v1 = z.data[z.offset + z.strides[0] * idx1 + z.strides[1] * idx2];
+ let v2 = pyodide.runPython(`repr(${x}[${idx1}, ${idx2}])`);
+ console.log(`${v1}, ${typeof(v1)}, ${v2}, ${typeof(v2)}, ${v1===v2}`);
+ if(v1.toString() !== v2){
+ throw new Error(`Discrepancy ${x}[${idx1}, ${idx2}]: ${v1} != ${v2}`);
+ }
+ }
+ }
+ z.release();
+ pyodide.runPython(`assert getrefcount(${x}) == 2`);
+ pyodide.runPython(`del ${x}`);
+ }
+ """
+ )
+
+
def test_pyproxy_mixins(selenium):
result = selenium.run_js(
"""
| Performance issues with buffer conversions from python to javascript
Hello @hoodmane, thank you for the magnificent APIs (deep/shallowCopyToJavaScript) that should eliminate the heavy cost of converting big arrays from python to Javascript. I think the shallowCopy version is designed for memory-address references instead of bulk content copies. But I find the performance not as good as I'd imagined, as shown below:

It takes about 1~7 seconds for shallowCopyToJavascript() to complete the memory-address reference and, I guess, some necessary metadata copying. That's not adequate for realtime computation. Any suggestions for better conversion performance?
_Originally posted by @daoxian in https://github.com/iodide-project/pyodide/issues/1167#issuecomment-774488338_
| Hi @daoxian. Thanks for reporting this bad behavior.
The handling of memory buffers is currently not good; this is an issue we hope to fix at some point. I think the current behavior is to make a javascript `Array` of 1080 javascript `Array`s of 3-byte `Uint8Array`. The innermost layer is sliced out of the buffer memory, so saying `arr[700][500][0] = 255` will adjust the original data, but populating those large javascript `Array`s is very inefficient (remember that an `Array` is a hashmap).
My guess is it would do much better if you transposed the array to have shape `(3, 1080, 1920)`, though of course this isn't very ergonomic if for instance you are intending to use this data to back an image.
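In numpy terms, a sketch (`img` stands in for the (1080, 1920, 3) frame from the report):
```python
import numpy as np

img = np.zeros((1080, 1920, 3), dtype=np.uint8)        # placeholder frame
img_t = np.ascontiguousarray(img.transpose(2, 0, 1))   # (3, 1080, 1920), contiguous copy
```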
Would you look at #1168? That issue is closely related to the poor performance here. If you have an opinion about that discussion it would be helpful. One thing that limits my ability to fix the Buffer handling code is that I don't really know what the use cases look like, so I really appreciate any input you have. Of course if you want to take a stab at improving the buffer conversion code, let me know and I can tell you what I think you'll need to know.
Indeed, it's much faster after the image transposed to (3, 1080, 1920).
See the actual case in #1168 and the code in #1203.
@hoodmane
The `shallowCopyToJavascript` method seems to be available only in ES6 JS, not in Node.js.
In Node.js, either of the following will trigger a runtime error:
> img = img.shallowCopyToJavascript(depth=1);
> img = img.shallowCopyToJavascript(1);
` JsException: ReferenceError: depth is not defined`
After having a look at the code, I can't find the definition of the variable `depth`:
> shallowCopyToJavascript : function(){
> let idresult = _python2js_with_depth(_getPtr(this), depth);
> let result = Module.hiwire.get_value(idresult);
> Module.hiwire.decref(idresult);
> return result;
> },
Maybe a param `depth` should be there, i.e. `function(depth)`?
I've tested it successfully in local env. I'll open a PR and fix it.
However, it takes around 30~40 ms for shallowCopyToJavascript to convert an image of size (287, 287, 3), a little too long for real-time computation. I hope the conversion can be accomplished in no more than 10 ms.
> However, it takes around 30~40 ms for shallowCopyToJavascript to convert an image of size (287, 287, 3)
Again, I'm hoping to fix this soon, there is no technical reason that it can't be much faster. | 2021-02-08T06:51:01 |
pyodide/pyodide | 1,231 | pyodide__pyodide-1231 | [
"1208"
] | f64bee40d7db5835c78976d090569502eacde9cf | diff --git a/conftest.py b/conftest.py
--- a/conftest.py
+++ b/conftest.py
@@ -98,8 +98,8 @@ def run(self, code):
return self.run_js(
f"""
let result = pyodide.runPython({code!r});
- if(result && result.deepCopyToJavascript){{
- let converted_result = result.deepCopyToJavascript();
+ if(result && result.toJs){{
+ let converted_result = result.toJs();
result.destroy();
return converted_result;
}}
@@ -111,8 +111,8 @@ def run_async(self, code):
return self.run_js(
f"""
let result = await pyodide.runPythonAsync({code!r});
- if(result && result.deepCopyToJavascript){{
- let converted_result = result.deepCopyToJavascript();
+ if(result && result.toJs){{
+ let converted_result = result.toJs();
result.destroy();
return converted_result;
}}
| diff --git a/packages/numpy/test_numpy.py b/packages/numpy/test_numpy.py
--- a/packages/numpy/test_numpy.py
+++ b/packages/numpy/test_numpy.py
@@ -2,16 +2,12 @@ def test_numpy(selenium):
selenium.load_package("numpy")
selenium.run("import numpy")
selenium.run("x = numpy.ones((32, 64))")
- assert selenium.run_js(
- "return pyodide.pyimport('x').deepCopyToJavascript().length == 32"
- )
+ assert selenium.run_js("return pyodide.pyimport('x').toJs().length == 32")
for i in range(32):
- assert selenium.run_js(
- f"return pyodide.pyimport('x').deepCopyToJavascript()[{i}].length == 64"
- )
+ assert selenium.run_js(f"return pyodide.pyimport('x').toJs()[{i}].length == 64")
for j in range(64):
assert selenium.run_js(
- f"return pyodide.pyimport('x').deepCopyToJavascript()[{i}][{j}] == 1"
+ f"return pyodide.pyimport('x').toJs()[{i}][{j}] == 1"
)
@@ -55,7 +51,7 @@ def assert_equal():
for k in range(2):
assert (
selenium.run_js(
- f"return pyodide.pyimport('x').deepCopyToJavascript()[{i}][{j}][{k}]"
+ f"return pyodide.pyimport('x').toJs()[{i}][{j}][{k}]"
)
== expected_result[i][j][k]
)
@@ -82,7 +78,7 @@ def assert_equal():
)
assert_equal()
classname = selenium.run_js(
- "return pyodide.pyimport('x').deepCopyToJavascript()[0][0].constructor.name"
+ "return pyodide.pyimport('x').toJs()[0][0].constructor.name"
)
if order == "C" and dtype not in ("uint64", "int64"):
# Here we expect a TypedArray subclass, such as Uint8Array, but
@@ -98,7 +94,7 @@ def assert_equal():
)
assert_equal()
classname = selenium.run_js(
- "return pyodide.pyimport('x').deepCopyToJavascript()[0][0].constructor.name"
+ "return pyodide.pyimport('x').toJs()[0][0].constructor.name"
)
if order == "C" and dtype in ("int8", "uint8"):
# Here we expect a TypedArray subclass, such as Uint8Array, but
@@ -112,18 +108,9 @@ def assert_equal():
assert selenium.run("np.array([True, False])") == [True, False]
selenium.run("x = np.array([['string1', 'string2'], ['string3', 'string4']])")
- assert (
- selenium.run_js("return pyodide.pyimport('x').deepCopyToJavascript().length")
- == 2
- )
- assert (
- selenium.run_js("return pyodide.pyimport('x').deepCopyToJavascript()[0][0]")
- == "string1"
- )
- assert (
- selenium.run_js("return pyodide.pyimport('x').deepCopyToJavascript()[1][1]")
- == "string4"
- )
+ assert selenium.run_js("return pyodide.pyimport('x').toJs().length") == 2
+ assert selenium.run_js("return pyodide.pyimport('x').toJs()[0][0]") == "string1"
+ assert selenium.run_js("return pyodide.pyimport('x').toJs()[1][1]") == "string4"
def test_py2js_buffer_clear_error_flag(selenium):
@@ -194,7 +181,7 @@ def test_runpythonasync_numpy(selenium_standalone):
)
for i in range(5):
assert selenium_standalone.run_js(
- f"return pyodide.pyimport('x').deepCopyToJavascript()[{i}] == 0"
+ f"return pyodide.pyimport('x').toJs()[{i}] == 0"
)
diff --git a/src/tests/test_jsproxy.py b/src/tests/test_jsproxy.py
--- a/src/tests/test_jsproxy.py
+++ b/src/tests/test_jsproxy.py
@@ -12,7 +12,7 @@ def test_jsproxy_dir(selenium):
from js import a
from js import b
[dir(a), dir(b)]
- `).deepCopyToJavascript();
+ `).toJs();
"""
)
jsproxy_items = set(
@@ -49,7 +49,7 @@ def test_jsproxy_getattr(selenium):
return pyodide.runPython(`
from js import a
[ a.x, a.y, a.typeof ]
- `).deepCopyToJavascript();
+ `).toJs();
"""
)
== [2, "9", "object"]
@@ -195,7 +195,7 @@ def test_jsproxy_call(selenium):
from js import f
[f(*range(n)) for n in range(10)]
`
- ).deepCopyToJavascript();
+ ).toJs();
"""
)
== list(range(10))
@@ -379,7 +379,7 @@ def test_mount_object(selenium):
import b
result += [a.s, dir(a), dir(b)]
result
- `).deepCopyToJavascript()
+ `).toJs()
"""
)
assert result[:3] == ["x1", "x2", 3]
diff --git a/src/tests/test_pyproxy.py b/src/tests/test_pyproxy.py
--- a/src/tests/test_pyproxy.py
+++ b/src/tests/test_pyproxy.py
@@ -55,8 +55,7 @@ def get_value(self, value):
"apply",
"destroy",
"$$",
- "deepCopyToJavascript",
- "shallowCopyToJavascript",
+ "toJs",
]
)
assert selenium.run("hasattr(f, 'baz')")
diff --git a/src/tests/test_typeconversions.py b/src/tests/test_typeconversions.py
--- a/src/tests/test_typeconversions.py
+++ b/src/tests/test_typeconversions.py
@@ -22,14 +22,14 @@ def test_python2js(selenium):
)
assert selenium.run_js(
"""
- let x = pyodide.runPython("[1, 2, 3]").deepCopyToJavascript();
+ let x = pyodide.runPython("[1, 2, 3]").toJs();
return ((x instanceof window.Array) && (x.length === 3) &&
(x[0] == 1) && (x[1] == 2) && (x[2] == 3))
"""
)
assert selenium.run_js(
"""
- let x = pyodide.runPython("{42: 64}").deepCopyToJavascript();
+ let x = pyodide.runPython("{42: 64}").toJs();
return (typeof x === "object") && (x[42] === 64)
"""
)
@@ -236,7 +236,7 @@ def test_recursive_list_to_js(selenium_standalone):
x.append(x)
"""
)
- selenium_standalone.run_js("x = pyodide.pyimport('x').deepCopyToJavascript();")
+ selenium_standalone.run_js("x = pyodide.pyimport('x').toJs();")
def test_recursive_dict_to_js(selenium_standalone):
@@ -246,7 +246,7 @@ def test_recursive_dict_to_js(selenium_standalone):
x[0] = x
"""
)
- selenium_standalone.run_js("x = pyodide.pyimport('x').deepCopyToJavascript();")
+ selenium_standalone.run_js("x = pyodide.pyimport('x').toJs();")
def test_list_js2py2js(selenium):
| Minor fix: add parameter for shallowCopyToJavascript
Follow #1202
| Thanks @daoxian. I'm thinking about removing this API entirely since this makes it the same as `deepCopyToJavascript` but with a different default.
> I'm thinking about removing this API entirely since this makes it the same as `deepCopyToJavascript` but with a different default.
Will this be part of #1192, with [`Pyproxy.intoJs()` with a deep-parameter](https://github.com/iodide-project/pyodide/issues/1192#issuecomment-773536247)?
@phorward Exactly, yes. | 2021-02-11T02:58:55 |
pyodide/pyodide | 1,306 | pyodide__pyodide-1306 | [
"1006"
] | 47ea3d1a685cd98ed8f68b98ca2f7ff6c542aeca | diff --git a/conftest.py b/conftest.py
--- a/conftest.py
+++ b/conftest.py
@@ -318,7 +318,7 @@ def get_driver(self):
options = Options()
options.add_argument("--headless")
options.add_argument("--no-sandbox")
-
+ options.add_argument("--js-flags=--expose-gc")
return Chrome(options=options)
| diff --git a/src/tests/test_pyodide.py b/src/tests/test_pyodide.py
--- a/src/tests/test_pyodide.py
+++ b/src/tests/test_pyodide.py
@@ -366,7 +366,7 @@ def __del__(self):
assert sys.getrefcount(f) == 3
assert testCallListener() == 7
assert sys.getrefcount(f) == 3
- assert testRemoveListener(f)
+ assert testRemoveListener(proxy)
assert sys.getrefcount(f) == 3
proxy.destroy()
assert sys.getrefcount(f) == 2
diff --git a/src/tests/test_pyproxy.py b/src/tests/test_pyproxy.py
--- a/src/tests/test_pyproxy.py
+++ b/src/tests/test_pyproxy.py
@@ -95,8 +95,8 @@ def pyfunc(*args, **kwargs):
result.push([getRefCount(), 3])
pyodide.runPython(`
- window.jsfunc(pyfunc) # re-used existing PyProxy
- window.jsfunc(pyfunc) # re-used existing PyProxy
+ window.jsfunc(pyfunc) # create new PyProxy
+ window.jsfunc(pyfunc) # create new PyProxy
`)
// the refcount should be 3 because:
@@ -104,7 +104,7 @@ def pyfunc(*args, **kwargs):
// 1. pyfunc exists
// 2. one reference from PyProxy to pyfunc is alive
// 3. pyfunc is referenced from the sys.getrefcount()-test
- result.push([getRefCount(), 3]);
+ result.push([getRefCount(), 5]);
return result;
"""
)
@@ -461,6 +461,142 @@ def __len__(self):
)
+def test_pyproxy_gc(selenium):
+ if selenium.browser != "chrome":
+ pytest.skip("No gc exposed")
+
+ # Two ways to trigger garbage collection in Chrome:
+ # 1. options.add_argument("--js-flags=--expose-gc") in conftest, and use
+ # gc() in javascript.
+ # 2. selenium.driver.execute_cdp_cmd("HeapProfiler.collectGarbage", {})
+ #
+ # Unclear how to trigger gc in Firefox. Possible to do this by navigating to
+ # "about:memory" and then triggering a button press to the "GC" button, but
+ # that seems too annoying.
+
+ selenium.run_js(
+ """
+ window.x = new FinalizationRegistry((val) => { window.val = val; });
+ x.register({}, 77);
+ gc();
+ """
+ )
+ assert selenium.run_js("return window.val;") == 77
+
+ selenium.run_js(
+ """
+ window.res = new Map();
+
+ let d = pyodide.runPython(`
+ from js import res
+ def get_ref_count(x):
+ res[x] = sys.getrefcount(d)
+ return res[x]
+
+ import sys
+ class Test:
+ def __del__(self):
+ res["destructor_ran"] = True
+
+ def get(self):
+ return 7
+
+ d = Test()
+ get_ref_count(0)
+ d
+ `);
+ let get_ref_count = pyodide.globals.get("get_ref_count");
+ get_ref_count(1);
+ d.get();
+ get_ref_count(2);
+ d.get();
+ """
+ )
+ selenium.driver.execute_cdp_cmd("HeapProfiler.collectGarbage", {})
+
+ selenium.run(
+ """
+ get_ref_count(3)
+ del d
+ """
+ )
+ selenium.driver.execute_cdp_cmd("HeapProfiler.collectGarbage", {})
+ a = selenium.run_js("return Array.from(res.entries());")
+ assert dict(a) == {0: 2, 1: 3, 2: 4, 3: 2, "destructor_ran": True}
+
+
+def test_pyproxy_gc_destroy(selenium):
+ if selenium.browser != "chrome":
+ pytest.skip("No gc exposed")
+
+ selenium.run_js(
+ """
+ window.res = new Map();
+ let d = pyodide.runPython(`
+ from js import res
+ def get_ref_count(x):
+ res[x] = sys.getrefcount(d)
+ return res[x]
+ import sys
+ class Test:
+ def __del__(self):
+ res["destructor_ran"] = True
+
+ def get(self):
+ return 7
+
+ d = Test()
+ get_ref_count(0)
+ d
+ `);
+ let get_ref_count = pyodide.globals.get("get_ref_count");
+ get_ref_count(1);
+ d.get();
+ get_ref_count(2);
+ d.get();
+ get_ref_count(3);
+ d.destroy();
+ get_ref_count(4);
+ gc();
+ get_ref_count(5);
+ """
+ )
+ selenium.driver.execute_cdp_cmd("HeapProfiler.collectGarbage", {})
+ selenium.run(
+ """
+ get_ref_count(6)
+ del d
+ """
+ )
+ a = selenium.run_js("return Array.from(res.entries());")
+ assert dict(a) == {
+ 0: 2,
+ 1: 3,
+ 2: 4,
+ 3: 5,
+ 4: 4,
+ 5: 4,
+ 6: 2,
+ "destructor_ran": True,
+ }
+
+
+def test_pyproxy_copy(selenium):
+ result = selenium.run_js(
+ """
+ let result = [];
+ let a = pyodide.runPython(`d = { 1 : 2}; d`);
+ let b = pyodide.runPython(`d`);
+ result.push(a.get(1));
+ a.destroy();
+ result.push(b.get(1));
+ return result;
+ """
+ )
+ assert result[0] == 2
+ assert result[1] == 2
+
+
def test_errors(selenium):
selenium.run_js(
"""
diff --git a/src/tests/test_python.py b/src/tests/test_python.py
--- a/src/tests/test_python.py
+++ b/src/tests/test_python.py
@@ -42,14 +42,6 @@ def test_globals_get_multiple(selenium):
selenium.run_js("pyodide.globals.get('v')")
-def test_globals_get_same(selenium):
- """See #382"""
- selenium.run("def func(): return 42")
- assert selenium.run_js(
- "return pyodide.globals.get('func') == pyodide.globals.get('func')"
- )
-
-
def test_open_url(selenium, httpserver):
httpserver.expect_request("/data").respond_with_data(
b"HELLO", content_type="text/text", headers={"Access-Control-Allow-Origin": "*"}
Use new Javascript GC APIs to leak less memory
@phorward has discussed here #693 the fact that we leak lots of memory. With `FinalizationRegistry` and `WeakRef` we can do better. We must still leak general Python ==> javascript ==> Python reference loops (I believe I can prove that it is theoretically impossible to detect these with the available APIs unless we are allowed to only use javascript objects produced by pyodide), but there are two cases that we can handle:
1. the case when no reference loops cross the javascript / python border, and
2. the case when the js objects involved in the reference loop are only referenced in ways that the Python GC knows about (so the javascript objects are owned by Python).
Case 1 is simple, case 2 is complicated (we'd have to implement a mark and sweep algorithm) and definitely not worth doing except for fun.
One issue that would come up with testing these things is that Javascript gives no way to ask for the garbage collector to run. So there's no way to get a deterministic expected behavior -- how do you test "Javascript will eventually do this"?
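For what it's worth, the test suite eventually worked around this (see the `conftest.py` change and tests in the patch above) by exposing Chrome's `gc()` via `--js-flags=--expose-gc` and by triggering a collection over the DevTools protocol. A sketch of the latter, assuming a Selenium Chrome driver:

```python
# Force a collection from the test harness so GC-dependent assertions
# become deterministic (Chrome only; no equivalent is known for Firefox).
selenium.driver.execute_cdp_cmd("HeapProfiler.collectGarbage", {})
```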
| Yes, utilizing `FinalizationRegistry` would be good to automate reference count decrementing of Python-created objects when they go out of scope on the JS side. Right now, there is the manual approach of using the [`.destroy()`](https://pyodide.readthedocs.io/en/latest/type_conversions.html?highlight=.destroy()#python-from-javascript) method from the JS side, which was necessitated before the availability of `FinalizationRegistry`. Of course, the manual `.destroy()` method doesn't solve the issue raised in #693, but using `FinalizationRegistry` should allow us to fix this. However, to make this work correctly, we'll need to get the reference counting right for Python objects used on the JavaScript side. If the same Python object is sent to JavaScript multiple times, its reference count in Python needs to be increased each time. This is currently not handled correctly (I initially thought this was due to #708, but the same thing happens in pyodide v15, which was before #708). The following example has an error when the first reference to the Python object is destroyed while the second reference is still in scope. This will need to be fixed for `FinalizationRegistry` to work properly, otherwise Python may garbage collect objects that are still in scope on the JS side, like in the example below.
```html
<!DOCTYPE html>
<html>
<head>
<script type="text/javascript">
// set the pyodide files URL (packages.json, pyodide.asm.data etc)
window.languagePluginUrl = 'https://cdn.jsdelivr.net/pyodide/v0.16.1/full/';
</script>
<script src="https://cdn.jsdelivr.net/pyodide/v0.16.1/full/pyodide.js"></script>
</head>
<body>
Pyodide test page <br>
Open your browser console to see pyodide output
<script type="text/javascript">
languagePluginLoader.then(function () {
ref1 = pyodide.runPython('a=10');
ref2 = pyodide.runPython('a');
console.log('first try:', ref2);
// manually decrement the python reference count
// this was the only option before FinalizationRegistry was available
// ref2 should still be enough to keep the python object around
ref1.destroy();
// force python garbage collection
pyodide.runPython(`
import gc
gc.collect()
`);
console.log('second try:', ref2); // results in error: TypeError: ref1 is undefined
});
</script>
</body>
</html>
```
@mgreminger Thanks for pointing out that bug. I think that we should probably do actual refcounting rather than the current approach which is a bit naive. On a related note, I also want to add `WeakPyProxy` and `WeakJsProxy` APIs.
I don't understand this code though:
```javascript
ref1 = pyodide.runPython('a=10');
ref2 = pyodide.runPython('a');
// ...
ref1.destroy();
```
The first call returns `None` and py2js converts `None` ==> `undefined`, so `ref1` should be `undefined`. The second call returns `10` (an `int`) and `py2js` converts `int` ==> `number`, so `ref2` should be the javascript number `10`. Then `ref1.destroy();` will throw a reference error.
It's clear though how to make a correct example that probably does what you claim.
Thanks @hoodmane for the correction. Yes, you're absolutely correct, I was trying to destroy undefined, which was causing the error. All of this only applies to custom Python objects anyway, so I shouldn't have used an integer as an example either. The following code correctly illustrates the issue, using a function as the Python object:
```html
<!DOCTYPE html>
<html>
<head>
<script type="text/javascript">
// set the pyodide files URL (packages.json, pyodide.asm.data etc)
window.languagePluginUrl = 'https://cdn.jsdelivr.net/pyodide/v0.16.1/full/';
</script>
<script src="https://cdn.jsdelivr.net/pyodide/v0.16.1/full/pyodide.js"></script>
</head>
<body>
Pyodide test page <br>
Open your browser console to see pyodide output
<script type="text/javascript">
languagePluginLoader.then(function () {
ref1 = pyodide.runPython('a=lambda x: 2*x; a');
ref2 = pyodide.runPython('a');
console.log('first try:', ref2(10)); // prints 20
// manually decrement the python reference count
// this was the only option before FinalizationRegistry was available
// ref2 should still be enough to keep the python object around
ref1.destroy();
// force python garbage collection
pyodide.runPython(`
import gc
gc.collect()
`);
console.log('second try:', ref2(20)); // results in error: Object has already been destroyed
});
</script>
</body>
</html>
``` | 2021-03-05T19:07:32 |
pyodide/pyodide | 1,322 | pyodide__pyodide-1322 | [
"1203",
"1221"
] | 4c1024545f3d4f0b8ab4974f839e23b1deafd497 | diff --git a/packages/micropip/micropip/micropip.py b/packages/micropip/micropip/micropip.py
--- a/packages/micropip/micropip/micropip.py
+++ b/packages/micropip/micropip/micropip.py
@@ -33,7 +33,7 @@ async def _get_url(url):
raise OSError(
f"Request for {url} failed with status {resp.status}: {resp.statusText}"
)
- return io.BytesIO(await resp.arrayBuffer())
+ return io.BytesIO((await resp.arrayBuffer()).to_py())
else:
diff --git a/src/pyodide-py/_pyodide/_core.py b/src/pyodide-py/_pyodide/_core.py
--- a/src/pyodide-py/_pyodide/_core.py
+++ b/src/pyodide-py/_pyodide/_core.py
@@ -82,6 +82,25 @@ def finally_(self, onfinally: Callable) -> "Promise":
this is needed because ``finally`` is a reserved keyword in Python.
"""
+ # There are no types for buffers:
+ # https://github.com/python/typing/issues/593
+ # https://bugs.python.org/issue27501
+ # This is just for docs so lets just make something up?
+
+ def assign(self, rhs: "ReadBuffer"):
+ """Assign from a Python buffer into the Javascript buffer.
+
+ Present only if the wrapped Javascript object is an ArrayBuffer or
+ an ArrayBuffer view.
+ """
+
+ def assign_to(self, to: "ReadWriteBuffer"):
+ """Assign to a Python buffer from the Javascript buffer.
+
+ Present only if the wrapped Javascript object is an ArrayBuffer or
+ an ArrayBuffer view.
+ """
+
# from pyproxy.c
def create_once_callable(obj: Callable) -> JsProxy:
| diff --git a/packages/numpy/test_numpy.py b/packages/numpy/test_numpy.py
--- a/packages/numpy/test_numpy.py
+++ b/packages/numpy/test_numpy.py
@@ -34,7 +34,7 @@ def test_typed_arrays(selenium):
selenium.run_js(f"window.array = new {jstype}([1, 2, 3, 4]);\n")
assert selenium.run(
"from js import array\n"
- "npyarray = numpy.asarray(array)\n"
+ "npyarray = numpy.asarray(array.to_py())\n"
f'npyarray.dtype.name == "{npytype}" '
"and npyarray == [1, 2, 3, 4]"
)
diff --git a/src/tests/test_jsproxy.py b/src/tests/test_jsproxy.py
--- a/src/tests/test_jsproxy.py
+++ b/src/tests/test_jsproxy.py
@@ -699,6 +699,77 @@ def raises(exc, match=None):
)
+def test_buffer(selenium):
+ selenium.run_js(
+ """
+ self.a = new Uint32Array(Array.from({length : 10}, (_,idx) => idx));
+ pyodide.runPython(`
+ from js import a
+ b = a.to_py()
+ b[4] = 7
+ assert b[8] == 8
+ a.assign_to(b)
+ assert b[4] == 4
+ b[4] = 7
+ a.assign(b)
+ assert a[4] == 7
+ `);
+ if(a[4] !== 7){
+ throw Error();
+ }
+ """
+ )
+ selenium.run_js(
+ """
+ self.a = new Uint32Array(Array.from({length : 10}, (_,idx) => idx));
+ pyodide.runPython(`
+ import js
+ from unittest import TestCase
+ raises = TestCase().assertRaisesRegex
+ from array import array
+ from js import a
+ c = array('b', range(30))
+ d = array('b', range(40))
+ with raises(ValueError, "cannot assign to TypedArray"):
+ a.assign(c)
+
+ with raises(ValueError, "cannot assign from TypedArray"):
+ a.assign_to(c)
+
+ with raises(ValueError, "incompatible formats"):
+ a.assign(d)
+
+ with raises(ValueError, "incompatible formats"):
+ a.assign_to(d)
+
+ e = array('I', range(10, 20))
+ a.assign(e)
+ `);
+ for(let [k, v] of a.entries()){
+ if(v !== k + 10){
+ throw new Error([v, k]);
+ }
+ }
+ """
+ )
+
+
+def test_buffer_assign_back(selenium):
+ result = selenium.run_js(
+ """
+ self.jsarray = new Uint8Array([1,2,3, 4, 5, 6]);
+ pyodide.runPython(`
+ from js import jsarray
+ array = jsarray.to_py()
+ array[1::2] = bytes([20, 77, 9])
+ jsarray.assign(array)
+ `);
+ return Array.from(jsarray)
+ """
+ )
+ assert result == [1, 20, 3, 77, 5, 9]
+
+
def test_memory_leaks(selenium):
# refcounts are tested automatically in conftest by default
selenium.run_js(
diff --git a/src/tests/test_typeconversions.py b/src/tests/test_typeconversions.py
--- a/src/tests/test_typeconversions.py
+++ b/src/tests/test_typeconversions.py
@@ -384,15 +384,17 @@ def test_js2python(selenium):
assert selenium.run("t.jspython is open")
assert selenium.run(
"""
- ((t.jsbytes.tolist() == [1, 2, 3])
- and (t.jsbytes.tobytes() == b"\x01\x02\x03"))
+ jsbytes = t.jsbytes.to_py()
+ ((jsbytes.tolist() == [1, 2, 3])
+ and (jsbytes.tobytes() == b"\x01\x02\x03"))
"""
)
assert selenium.run(
"""
+ jsfloats = t.jsfloats.to_py()
import struct
expected = struct.pack("fff", 1, 2, 3)
- (t.jsfloats.tolist() == [1, 2, 3]) and (t.jsfloats.tobytes() == expected)
+ (jsfloats.tolist() == [1, 2, 3]) and (jsfloats.tobytes() == expected)
"""
)
assert selenium.run('str(t.jsobject) == "[object XMLHttpRequest]"')
@@ -422,7 +424,6 @@ def test_js2python_bool(selenium):
)
[email protected]("wasm_heap", (False, True))
@pytest.mark.parametrize(
"jstype, pytype",
(
@@ -437,43 +438,35 @@ def test_js2python_bool(selenium):
("Float64Array", "d"),
),
)
-def test_typed_arrays(selenium, wasm_heap, jstype, pytype):
- if not wasm_heap:
- selenium.run_js(f"window.array = new {jstype}([1, 2, 3, 4]);\n")
- else:
- selenium.run_js(
- f"""
- let buffer = pyodide._module._malloc(
- 4 * {jstype}.BYTES_PER_ELEMENT);
- window.array = new {jstype}(
- pyodide._module.HEAPU8.buffer, buffer, 4);
- window.array[0] = 1;
- window.array[1] = 2;
- window.array[2] = 3;
- window.array[3] = 4;
- """
- )
- assert selenium.run(
+def test_typed_arrays(selenium, jstype, pytype):
+ assert selenium.run_js(
f"""
- from js import array
- import struct
- expected = struct.pack("{pytype*4}", 1, 2, 3, 4)
- print(array.format, array.tolist(), array.tobytes())
- ((array.format == "{pytype}")
- and array.tolist() == [1, 2, 3, 4]
- and array.tobytes() == expected
- and array.obj._has_bytes() is {not wasm_heap})
- """
+ window.array = new {jstype}([1, 2, 3, 4]);
+ return pyodide.runPython(`
+ from js import array
+ array = array.to_py()
+ import struct
+ expected = struct.pack("{pytype*4}", 1, 2, 3, 4)
+ print(array.format, array.tolist(), array.tobytes())
+ # Result:
+ ((array.format == "{pytype}")
+ and array.tolist() == [1, 2, 3, 4]
+ and array.tobytes() == expected)
+ `);
+ """
)
def test_array_buffer(selenium):
- selenium.run_js("window.array = new ArrayBuffer(100);\n")
assert (
- selenium.run(
+ selenium.run_js(
"""
- from js import array
- len(array.tobytes())
+ window.array = new ArrayBuffer(100);
+ return pyodide.runPython(`
+ from js import array
+ array = array.to_py()
+ len(array.tobytes())
+ `);
"""
)
== 100
| Performance issues in buffer conversions from javascript to python
In my case, from time to time in Python I have to acquire arrays returned from JS methods, e.g. a huge image array returned by some method of opencv.js, which is currently missing from the Python packages. The cost of automatically copying the array data from JavaScript to Python is too high, and can't be borne in some real-time computation cases.
_Originally posted by @daoxian in https://github.com/iodide-project/pyodide/issues/1192#issuecomment-774496411_
Soundness issues in buffer conversions code
Currently if one converts a 1d ndarray from Python ==> javascript, `_python2js_shareable_buffer_recursive` produces a `TypedArray` object that shares memory with the `ndarray` but without associating the lifetime of the `TypedArray` with the backing memory in any way, so the memory is borrowed. If the Python object is later garbage collected, then the memory backing this `TypedArray` will be freed, but without any warning to someone who stored the `TypedArray`. The memory may then be reallocated to something else, potentially leading to memory corruption. If the `TypedArray` is reimported into Python it yields a Python `memoryview` which also points to the same borrowed memory.
I think the only reason this doesn't happen more often is that we leak a lot of memory and people don't use the buffer conversions that much.
I think in the case that the user attempts to import a `TypedArray` on the Wasm heap into python, we should actually throw an error because that memory has been malloc'd and it may eventually be freed but we have no way of knowing when it is freed. I think we need some extra Javascript type which is a `TypedArray` referencing owned memory with a `release` method to let go of the memory when it is done. If the memory has a static lifetime then the `release` method can be a no-op. Then this pair `(TypedArray, release)` can be safely imported into Python as a Buffer with a `bf_release` implementation that calls the `release` function.
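A hypothetical Python model of that `(TypedArray, release)` pairing (not a real API, just a sketch of the intended semantics; `release` would be a no-op for statically-owned memory):

```python
from contextlib import contextmanager

@contextmanager
def borrowed_buffer(raw, release):
    # `raw` is any buffer-like object; `release` frees the backing
    # memory once the consumer is done, mirroring the proposed bf_release.
    view = memoryview(raw)
    try:
        yield view
    finally:
        view.release()
        release()
```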
This obviously will require a more carefully designed API, but the current system simply does not work.
See PR #1215 which partially addresses this. @daoxian
| @daoxian If you wouldn't mind posting a small code example, it would be helpful.
> @daoxian If you wouldn't mind posting a small code example, it would be helpful.
Sure.
In Python, I want to resize an image with OpenCV. Currently there is no opencv in the pyodide packages, so I have to call opencv.js, which resides in JS:
> print(im_patch.size, im_patch.shape) # im_patch is a numpy ndarray standing for an image
> im_patch_bytes = im_patch.tobytes() # convert to bytes to avoid a memory copy when converting to the JS arg of resize_img()
> scaled_img_raw = js.g_tracker.resize_img(im_patch_bytes, (im_patch.shape[0], im_patch.shape[1]), (model_sz, model_sz))
> im_patch = np.asarray(scaled_img_raw, dtype=np.uint8).reshape(model_sz, model_sz, 3)
In the JavaScript class method:
> resize_img (img_bytes /* auto converted to Uint8ClampedArray already */, from_shape, to_shape) {
> const img=cv.matFromArray(from_shape[0], from_shape[1], cv.CV_8UC3, img_bytes);
> const scaledImg = new cv.Mat();
> cv.resize(img, scaledImg, new cv.Size(to_shape[1], to_shape[0]), 0, 0, cv.INTER_NEAREST);
> return new Uint8ClampedArray(scaledImg.data);
> }
The reason I don't use Pillow to resize the image is its poor performance.
The situation is that I'm migrating an object tracking algorithm written in Python to JavaScript so as to run it in the browser. It's a real-time tracking algorithm: given a stream of input images, it should process every image and run as fast as possible.
Thanks! This is a very interesting example.
In your particular use case there is a way to fix the performance. I think roughly the following should work:
```javascript
function alloc_matrix_3d(shape0, shape1, shape2){
  let num_bytes = shape0*shape1*shape2;
  // Allocate on the wasm heap so the Python side can read it without a copy
  let ptr = pyodide._module._PyMem_Malloc(num_bytes);
  let bytes = pyodide._module.HEAP8.subarray(ptr, ptr + num_bytes);
  return [ptr, cv.matFromArray(shape0, shape1, cv.CV_8UC3, bytes)];
}
function free_ptr(ptr){
  pyodide._module._PyMem_Free(ptr);
}
function resize_img (img_bytes /* auto converted to Uint8ClampedArray already */, from_shape, to_shape) {
  const img = cv.matFromArray(from_shape[0], from_shape[1], cv.CV_8UC3, img_bytes);
  // Somehow you need to remember to free_ptr later, otherwise you'll leak this memory
  const [ptr, scaledImg] = alloc_matrix_3d(to_shape[0], to_shape[1], 3);
  cv.resize(img, scaledImg, new cv.Size(to_shape[1], to_shape[0]), 0, 0, cv.INTER_NEAREST);
  return new Uint8ClampedArray(scaledImg.data);
}
```
The performance issue happens in the following code starting on [jsproxy.c line 713](https://github.com/iodide-project/pyodide/blob/master/src/core/jsproxy.c#L713):
```C
static int
JsBuffer_GetBuffer(PyObject* obj, Py_buffer* view, int flags)
{
bool success = false;
JsBuffer* self = (JsBuffer*)obj;
view->obj = NULL;
void* ptr;
if (hiwire_is_on_wasm_heap(JsProxy_REF(self))) {
// This is fast (no copy)
ptr = (void*)hiwire_get_byteOffset(JsProxy_REF(self));
} else {
// Copy the data onto wasm_heap =(
ptr = PyBytes_AsString(self->bytes);
FAIL_IF_NULL(ptr);
hiwire_copy_to_ptr(JsProxy_REF(self), ptr);
}
```
So the trick is to pass an object that `is_on_wasm_heap` already.
> Thanks! This is a very interesting example.
>
> In your particular use case there is a way to fix the performance. I think roughly the following should work: [...]
It looks so amazing! I'll check it out, thanks!
I think we should make a public API to allocate / free on the pyodide heap. This is not the only performance issue in buffer conversions, but at least in this case where you are already explicitly allocating the memory it's really unfortunate to allocate it in the wrong place and then perform an extra copy because there's no API to allocate in the right place.
Thanks a lot for your reports, these conversations are very helpful. Is your code publicly available somewhere? I'd be interested to look at it.
> Thanks a lot for your reports, these conversations are very helpful. Is your code publicly available somewhere? I'd be interested to look at it.
The original Python code is taken from here:
[https://github.com/STVIR/pysot/blob/2f98bfb6f7b56e4a4263c7024a3a3a7143d630ac/pysot/tracker/base_tracker.py#L88](https://github.com/STVIR/pysot/blob/2f98bfb6f7b56e4a4263c7024a3a3a7143d630ac/pysot/tracker/base_tracker.py#L88)
Example:
```javascript
pyodide.runPython("x = b'123'");
a = pyodide.pyimport("x"); // a ==> Uint8ClampedArray(3)Β [49, 50, 51]
pyodide.runPython("del x");
a // ==> Uint8ClampedArray(3)Β [64, 0, 0]
```
Roger that and thanks a lot.
Some extra comments: in most cases this API (pyodide.getRawBuffer) works well. When a numpy ndarray is NOT contiguous after some complicated ops, the raw buffer may become unrecognizable to `cv.matFromArray()`, which will cause an "array is too long" exception. The resolution is to make the numpy ndarray contiguous again with `numpy.ascontiguousarray()` before passing it as an argument to that opencv method. Another, less efficient way is to convert it into bytes before passing it into JS. However, the issue has nothing to do with the API.
For instance if you say: `myslice = myarray[::2]`, does `cv.matFromArray` have a problem with it already?
> For instance if you say: `myslice = myarray[::2]`, does `cv.matFromArray` have a problem with it already?
My case is:
```
import numpy as np
a=np.ones(6220800, dtype=np.uint8).reshape(1080,1920,3)
b=a[100:300, 500:800, :]
print(b.flags, b.shape, b.strides)
c=np.ascontiguousarray(b)
print(c.flags, c.shape, c.strides)
```
b is not contiguous, and b.strides[0] is not the expected `1 * 3 * 300`, but a weird 5760 (which is just the row stride of the full 1920-column parent array: 1920 * 3 bytes).
In JS, if we pass b.getRawBuffer().buffer to cv.matFromArray, an exception will be thrown.
After ascontiguousarray, c is now a normal ndarray.
> For instance if you say: `myslice = myarray[::2]`, does `cv.matFromArray` have a problem with it already?
after your above op, the flags.c_contiguous is still True though the strides changed. In this case, cv.matFromArray will be successful if we don't care about the underlying data.
@hoodmane Is the getRawBuffer() API in Chrome any different from that in Firefox? I found a performance lag in Chrome. In Firefox it takes 1 ms to run the API, but 10~50 ms in Chrome.
Python (the caller):
```
im_patch = np.ascontiguousarray(im_patch)
scaled_img_raw = js.g_tracker.resize_img(im_patch, (model_sz, model_sz))
```
JS (the callee):
```
resize_img (img_pyproxy, to_shape) {
console.log('img_pyproxy=', img_pyproxy, to_shape);
const {buffer, shape, strides, release} = img_pyproxy.getRawBuffer('u8');
console.log('get raw buffer ok');
const img = cv.matFromArray(shape[0], shape[1], cv.CV_8UC3, buffer);
const scaledImg = new cv.Mat();
cv.resize(img, scaledImg, new cv.Size(to_shape[1], to_shape[0]), 0, 0, cv.INTER_NEAREST);
img_pyproxy.destroy();
release();
return new Uint8ClampedArray(scaledImg.data);
}
```
Sample output for Firefox:
```
19:19:51.232 img_pyproxy= Proxy { <target>: target(), <handler>: {β¦} } Array [ 287, 287 ]
19:19:51.233 get raw buffer ok
```
Sample output for Chrome:
```
19:25:01.462 img_pyproxy= ProxyΒ {$$: {β¦}, toString: Ζ, destroy: Ζ, apply: Ζ, shallowCopyToJavascript: Ζ,Β β¦} (2)Β [287, 287]
19:25:01.488 get raw buffer ok
```
> after your above op, the flags.c_contiguous is still True though the strides changed.
No it's not:
```python
>>> a = np.arange(24).reshape([8,3])
>>> b =a[::2]
>>> av = memoryview(a)
>>> bv = memoryview(b)
>>> av.c_contiguous
True
>>> bv.c_contiguous
False
```
The definition of `_IsCContiguous` in `cpython/Objects/abstract.c` is:
```C
sd = view->itemsize;
for (i=view->ndim-1; i>=0; i--) {
dim = view->shape[i];
if (dim > 1 && view->strides[i] != sd) {
return 0;
}
sd *= dim;
}
return 1;
```
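For reference, a Python transcription of that check (a sketch; `shape`, `strides`, and `itemsize` stand in for the `Py_buffer` fields):

```python
def is_c_contiguous(shape, strides, itemsize):
    sd = itemsize
    # Walk dimensions from innermost to outermost; dimensions of
    # size <= 1 are skipped, exactly as the C code above does.
    for dim, stride in zip(reversed(shape), reversed(strides)):
        if dim > 1 and stride != sd:
            return False
        sd *= dim
    return True
```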
> Is the getRawBuffer() API in Chrome any different from that in Firefox? I found a performance lag in Chrome. In Firefox it takes 1 ms to run the API, but 10~50 ms in Chrome.
No clue why you're seeing this. Will try to reproduce.
> > after your above op, the flags.c_contiguous is still True though the strides changed.
>
> No it's not: [...]
It's not definite, try this:
```
a=np.arange(30, dtype=np.uint8).reshape(2,5,3)
b=a[::2]
```
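A quick check of why this particular slice still reports as contiguous: the first axis collapses to length 1, and the contiguity check skips dimensions of size <= 1 (sketch):

```python
import numpy as np

a = np.arange(30, dtype=np.uint8).reshape(2, 5, 3)
b = a[::2]  # shape (1, 5, 3): only the stride of a size-1 axis
            # differs from the contiguous layout, and that is ignored
assert memoryview(b).c_contiguous
```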
> > Is the getRawBuffer() API in Chrome any different from that in Firefox? I found a performance lag in Chrome. In Firefox it takes 1 ms to run the API, but 10~50 ms in Chrome.
>
> No clue why you're seeing this. Will try to reproduce.
Thanks.
Chrome ver: 88.0.4324.150
Firefox ver: 85.0.2
> > Is the getRawBuffer() API in Chrome any different from that in Firefox? I found a performance lag in Chrome. In Firefox it takes 1 ms to run the API, but 10~50 ms in Chrome.
>
> No clue why you're seeing this. Will try to reproduce.
Have you reproduced the issue in Chrome? @hoodmane
I find that it takes around 5~15 ms for JS to enter the `getRawBuffer` method.
> Module.PyProxyBufferMethods = {
> getRawBuffer : function(type = "u8"){
> console.log( "enter function\n");
> let ArrayType = type_to_array_map.get(type);
> console.log( "type_to_array_map.get done\n");
> if(ArrayType === undefined){
> throw new Error(`Unknown type ${type}`);
> }
> console.log( "before _getPtr\n");
> let this_ptr = _getPtr(this);
> console.log( "before memview_get_buffer\n");
> let buffer_struct_ptr = __pyproxy_memoryview_get_buffer(this_ptr);
> console.log( "after memview_get_buffer\n");
> if(buffer_struct_ptr === 0){
> throw new Error("Failed");
> }
>
the log is:

PS: it's not always the case, maybe due to some external factor, so I'm closing this.
@hoodmane I found a bug in your proxy-buffer-procs branch:
src/core/pyproxy.c, line 667:
> if(window.BigInt64Array)
will break in web workers (`window` cannot be accessed in a web worker) and throws an exception there. You could fix it before the PR is merged.
> if(BigInt64Array)
is just fine to run.
@daoxian Thanks for pointing this out. Could you put comments about improvements to specific PRs on the PR you're talking about?
It should say
```javascript
if(globalThis.BigInt64Array)
``` | 2021-03-12T06:29:33 |
pyodide/pyodide | 1,334 | pyodide__pyodide-1334 | [
"1325"
] | a5e21ba75a6debc235abe027176c2cf5b894ce68 | diff --git a/src/pyodide-py/pyodide/__init__.py b/src/pyodide-py/pyodide/__init__.py
--- a/src/pyodide-py/pyodide/__init__.py
+++ b/src/pyodide-py/pyodide/__init__.py
@@ -1,5 +1,5 @@
from ._base import open_url, eval_code, eval_code_async, find_imports, as_nested_list
-from ._core import JsException # type: ignore
+from ._core import JsException, create_once_callable, create_proxy # type: ignore
from ._importhooks import JsFinder
from .webloop import WebLoopPolicy
import asyncio
@@ -26,4 +26,6 @@
"JsException",
"register_js_module",
"unregister_js_module",
+ "create_once_callable",
+ "create_proxy",
]
diff --git a/src/pyodide-py/pyodide/_core.py b/src/pyodide-py/pyodide/_core.py
--- a/src/pyodide-py/pyodide/_core.py
+++ b/src/pyodide-py/pyodide/_core.py
@@ -1,7 +1,13 @@
import platform
+from typing import Any, Callable
if platform.system() == "Emscripten":
- from _pyodide_core import JsProxy, JsException
+ from _pyodide_core import (
+ JsProxy,
+ JsException,
+ create_proxy,
+ create_once_callable,
+ )
else:
# Can add shims here if we are so inclined.
class JsException(Exception): # type: ignore
@@ -16,5 +22,23 @@ class JsProxy: # type: ignore
# Defined in jsproxy.c
+ # Defined in jsproxy.c
+
+ def create_once_callable(obj: Callable) -> JsProxy:
+ """Wrap a Python callable in a Javascript function that can be called
+ once. After being called the proxy will decrement the reference count
+ of the Callable. The javascript function also has a `destroy` API that
+ can be used to release the proxy without calling it.
+ """
+ return obj
+
+ def create_proxy(obj: Any) -> JsProxy:
+ """Create a `JsProxy` of a `PyProxy`.
+
+ This allows explicit control over the lifetime of the `PyProxy` from
+ Python: call the `destroy` API when done.
+ """
+ return obj
+
-__all__ = ["JsProxy", "JsException"]
+__all__ = ["JsProxy", "JsException", "create_proxy", "create_once_callable"]
diff --git a/src/pyodide-py/pyodide/webloop.py b/src/pyodide-py/pyodide/webloop.py
--- a/src/pyodide-py/pyodide/webloop.py
+++ b/src/pyodide-py/pyodide/webloop.py
@@ -132,11 +132,12 @@ def call_later(
This uses `setTimeout(callback, delay)`
"""
from js import setTimeout
+ from . import create_once_callable
if delay < 0:
raise ValueError("Can't schedule in the past")
h = asyncio.Handle(callback, args, self, context=context)
- setTimeout(h._run, delay * 1000)
+ setTimeout(create_once_callable(h._run), delay * 1000)
return h
def call_at(
| diff --git a/src/tests/test_asyncio.py b/src/tests/test_asyncio.py
--- a/src/tests/test_asyncio.py
+++ b/src/tests/test_asyncio.py
@@ -35,6 +35,122 @@ async def temp():
)
+def test_then_jsproxy(selenium):
+ selenium.run(
+ """
+ def prom(res, rej):
+ global resolve
+ global reject
+ resolve = res
+ reject = rej
+
+ from js import Promise
+ result = None
+ err = None
+ finally_occurred = False
+
+ def onfulfilled(value):
+ global result
+ result = value
+
+ def onrejected(value):
+ global err
+ err = value
+
+ def onfinally():
+ global finally_occurred
+ finally_occurred = True
+ """
+ )
+
+ selenium.run(
+ """
+ p = Promise.new(prom)
+ p.then(onfulfilled, onrejected)
+ resolve(10)
+ """
+ )
+ time.sleep(0.01)
+ selenium.run(
+ """
+ assert result == 10
+ assert err is None
+ result = None
+ """
+ )
+
+ selenium.run(
+ """
+ p = Promise.new(prom)
+ p.then(onfulfilled, onrejected)
+ reject(10)
+ """
+ )
+ time.sleep(0.01)
+ selenium.run(
+ """
+ assert result is None
+ assert err == 10
+ err = None
+ """
+ )
+
+ selenium.run(
+ """
+ p = Promise.new(prom)
+ p.catch(onrejected)
+ resolve(10)
+ """
+ )
+ time.sleep(0.01)
+ selenium.run("assert err is None")
+
+ selenium.run(
+ """
+ p = Promise.new(prom)
+ p.catch(onrejected)
+ reject(10)
+ """
+ )
+ time.sleep(0.01)
+ selenium.run(
+ """
+ assert err == 10
+ err = None
+ """
+ )
+
+ selenium.run(
+ """
+ p = Promise.new(prom)
+ p.finally_(onfinally)
+ resolve(10)
+ """
+ )
+ time.sleep(0.01)
+ selenium.run(
+ """
+ assert finally_occurred
+ finally_occurred = False
+ """
+ )
+
+ selenium.run(
+ """
+ p = Promise.new(prom)
+ p.finally_(onfinally)
+ reject(10)
+ """
+ )
+ time.sleep(0.01)
+ selenium.run(
+ """
+ assert finally_occurred
+ finally_occurred = False
+ """
+ )
+
+
def test_await_fetch(selenium):
selenium.run(
"""
diff --git a/src/tests/test_pyodide.py b/src/tests/test_pyodide.py
--- a/src/tests/test_pyodide.py
+++ b/src/tests/test_pyodide.py
@@ -255,3 +255,87 @@ def test_run_python_last_exc(selenium):
`);
"""
)
+
+
+def test_create_once_callable(selenium):
+ selenium.run_js(
+ """
+ window.call7 = function call7(f){
+ return f(7);
+ }
+ pyodide.runPython(`
+ from pyodide import create_once_callable, JsException
+ from js import call7;
+ from unittest import TestCase
+ raises = TestCase().assertRaisesRegex
+ class Square:
+ def __call__(self, x):
+ return x*x
+
+ def __del__(self):
+ global destroyed
+ destroyed = True
+
+ f = Square()
+ import sys
+ assert sys.getrefcount(f) == 2
+ proxy = create_once_callable(f)
+ assert sys.getrefcount(f) == 3
+ assert call7(proxy) == 49
+ assert sys.getrefcount(f) == 2
+ with raises(JsException, "can only be called once"):
+ call7(proxy)
+ destroyed = False
+ del f
+ assert destroyed == True
+ del proxy # causes a fatal error =(
+ `);
+ """
+ )
+
+
+def test_create_proxy(selenium):
+ selenium.run_js(
+ """
+ window.testAddListener = function(f){
+ window.listener = f;
+ }
+ window.testCallListener = function(f){
+ return window.listener();
+ }
+ window.testRemoveListener = function(f){
+ return window.listener === f;
+ }
+ pyodide.runPython(`
+ from pyodide import create_proxy
+ from js import testAddListener, testCallListener, testRemoveListener;
+ class Test:
+ def __call__(self):
+ return 7
+
+ def __del__(self):
+ global destroyed
+ destroyed = True
+
+ f = Test()
+ import sys
+ assert sys.getrefcount(f) == 2
+ proxy = create_proxy(f)
+ assert sys.getrefcount(f) == 3
+ assert proxy() == 7
+ testAddListener(proxy)
+ assert sys.getrefcount(f) == 3
+ assert testCallListener() == 7
+ assert sys.getrefcount(f) == 3
+ assert testCallListener() == 7
+ assert sys.getrefcount(f) == 3
+ assert testRemoveListener(f)
+ assert sys.getrefcount(f) == 3
+ proxy.destroy()
+ assert sys.getrefcount(f) == 2
+ destroyed = False
+ del f
+ assert destroyed == True
+ `);
+ """
+ )
| Potential memory leak in WebLoop.py
WebLoop.py uses `js.setTimeout` to implement call_* functions. The code is:
```
h = asyncio.Handle(callback, args, self, context=context)
setTimeout(h._run, delay * 1000)
return h
```
https://github.com/iodide-project/pyodide/blob/de7c3420103fabbf01c589fcf2a429d157a6bd5e/src/pyodide-py/pyodide/webloop.py#L139
I think this may result in a memory leak, since the `h._run` JS proxy is never released.
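For context, the patch above fixes this by wrapping the callback in `create_once_callable`, which releases the proxy after its single invocation. A sketch of the repaired pattern:

```python
import asyncio
from pyodide import create_once_callable
from js import setTimeout

def call_later(self, delay, callback, *args, context=None):
    if delay < 0:
        raise ValueError("Can't schedule in the past")
    h = asyncio.Handle(callback, args, self, context=context)
    # The once-callable proxy frees itself after it runs, so the
    # JS-side reference to h._run no longer leaks.
    setTimeout(create_once_callable(h._run), delay * 1000)
    return h
```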
| Yes, that's a memory leak. Thanks for the report! (There's currently a bunch of these =( )
From a design perspective, I'm not sure what the best way to fix leaks like this is. We need control over the lifetime of PyProxy from Python. I think being able to create a JsProxy of a PyProxy would help a lot because then we could directly access the `destroy` api from Python. Once #1306 is merged, this will be mitigated in any browser that supports `FinalizationRegistry` (it might take a long time for the memory to be reclaimed in that way though). | 2021-03-13T23:27:32 |
pyodide/pyodide | 1,376 | pyodide__pyodide-1376 | [
"1374",
"749"
] | 05a84ba3e9a6ed2285abd35c451b51079b595a69 | diff --git a/docs/sphinx_pyodide/sphinx_pyodide/jsdoc.py b/docs/sphinx_pyodide/sphinx_pyodide/jsdoc.py
--- a/docs/sphinx_pyodide/sphinx_pyodide/jsdoc.py
+++ b/docs/sphinx_pyodide/sphinx_pyodide/jsdoc.py
@@ -211,7 +211,7 @@ def get_summary_row(self, pkgname, obj):
The output is designed to be input to format_table. The link name
needs to be set up so that :any:`link_name` makes a link to the
- actual api docs for this object.
+ actual API docs for this object.
"""
sig = self.get_sig(obj)
display_name = obj.name
diff --git a/src/pyodide-py/_pyodide/_core.py b/src/pyodide-py/_pyodide/_core.py
--- a/src/pyodide-py/_pyodide/_core.py
+++ b/src/pyodide-py/_pyodide/_core.py
@@ -48,7 +48,7 @@ def to_py(self) -> Any:
pass
def then(self, onfulfilled: Callable, onrejected: Callable) -> "Promise":
- """The ``Promise.then`` api, wrapped to manage the lifetimes of the
+ """The ``Promise.then`` API, wrapped to manage the lifetimes of the
handlers.
Only available if the wrapped Javascript object has a "then" method.
@@ -57,7 +57,7 @@ def then(self, onfulfilled: Callable, onrejected: Callable) -> "Promise":
"""
def catch(self, onrejected: Callable) -> "Promise":
- """The ``Promise.catch`` api, wrapped to manage the lifetimes of the
+ """The ``Promise.catch`` API, wrapped to manage the lifetimes of the
handler.
Only available if the wrapped Javascript object has a "then" method.
@@ -66,7 +66,7 @@ def catch(self, onrejected: Callable) -> "Promise":
"""
def finally_(self, onfinally: Callable) -> "Promise":
- """The ``Promise.finally`` api, wrapped to manage the lifetimes of
+ """The ``Promise.finally`` API, wrapped to manage the lifetimes of
the handler.
Only available if the wrapped Javascript object has a "then" method.
| diff --git a/packages/numpy/test_numpy.py b/packages/numpy/test_numpy.py
--- a/packages/numpy/test_numpy.py
+++ b/packages/numpy/test_numpy.py
@@ -40,8 +40,23 @@ def test_typed_arrays(selenium):
)
-def test_python2js_numpy_dtype(selenium_standalone):
- selenium = selenium_standalone
[email protected]("order", ("C", "F"))
[email protected](
+ "dtype",
+ (
+ "int8",
+ "uint8",
+ "int16",
+ "uint16",
+ "int32",
+ "uint32",
+ "int64",
+ "uint64",
+ "float32",
+ "float64",
+ ),
+)
+def test_python2js_numpy_dtype(selenium, order, dtype):
selenium.load_package("numpy")
selenium.run("import numpy as np")
@@ -56,59 +71,37 @@ def assert_equal():
for k in range(2):
assert (
selenium.run_js(
- f"return pyodide.globals.get('x').toJs()[{i}][{j}][{k}]"
+ f"return Number(pyodide.globals.get('x').toJs()[{i}][{j}][{k}])"
)
== expected_result[i][j][k]
)
- for order in ("C", "F"):
- for dtype in (
- "int8",
- "uint8",
- "int16",
- "uint16",
- "int32",
- "uint32",
- "int64",
- "uint64",
- "float32",
- "float64",
- ):
- selenium.run(
- f"""
- x = np.arange(8, dtype=np.{dtype})
- x = x.reshape((2, 2, 2))
- x = x.copy({order!r})
- """
- )
- assert_equal()
- classname = selenium.run_js(
- "return pyodide.globals.get('x').toJs()[0][0].constructor.name"
- )
- if order == "C" and dtype not in ("uint64", "int64"):
- # Here we expect a TypedArray subclass, such as Uint8Array, but
- # not a plain-old Array
- assert classname.endswith("Array")
- assert classname != "Array"
- else:
- assert classname == "Array"
- selenium.run(
- """
- x = x.byteswap().newbyteorder()
- """
- )
- assert_equal()
- classname = selenium.run_js(
- "return pyodide.globals.get('x').toJs()[0][0].constructor.name"
- )
- if order == "C" and dtype in ("int8", "uint8"):
- # Here we expect a TypedArray subclass, such as Uint8Array, but
- # not a plain-old Array -- but only for single byte types where
- # endianness doesn't matter
- assert classname.endswith("Array")
- assert classname != "Array"
- else:
- assert classname == "Array"
+ selenium.run(
+ f"""
+ x = np.arange(8, dtype=np.{dtype})
+ x = x.reshape((2, 2, 2))
+ x = x.copy({order!r})
+ """
+ )
+ assert_equal()
+ classname = selenium.run_js(
+ "return pyodide.globals.get('x').toJs()[0][0].constructor.name"
+ )
+ # We expect a TypedArray subclass, such as Uint8Array, but not a plain-old
+ # Array
+ assert classname.endswith("Array")
+ assert classname != "Array"
+ selenium.run(
+ """
+ x = x.byteswap().newbyteorder()
+ """
+ )
+ assert_equal()
+ classname = selenium.run_js(
+ "return pyodide.globals.get('x').toJs()[0][0].constructor.name"
+ )
+ assert classname.endswith("Array")
+ assert classname != "Array"
assert selenium.run("np.array([True, False])") == [True, False]
@@ -126,13 +119,9 @@ def test_py2js_buffer_clear_error_flag(selenium):
)
-def test_python2js_numpy_scalar(selenium_standalone):
- selenium = selenium_standalone
-
- selenium.load_package("numpy")
- selenium.run("import numpy as np")
-
- for dtype in (
[email protected](
+ "dtype",
+ (
"int8",
"uint8",
"int16",
@@ -143,33 +132,38 @@ def test_python2js_numpy_scalar(selenium_standalone):
"uint64",
"float32",
"float64",
- ):
- selenium.run(
- f"""
- x = np.{dtype}(1)
- """
- )
- assert (
- selenium.run_js(
- """
- return pyodide.globals.get('x') == 1
- """
- )
- is True
- )
- selenium.run(
- """
- x = x.byteswap().newbyteorder()
+ ),
+)
+def test_python2js_numpy_scalar(selenium, dtype):
+
+ selenium.load_package("numpy")
+ selenium.run("import numpy as np")
+ selenium.run(
+ f"""
+ x = np.{dtype}(1)
+ """
+ )
+ assert (
+ selenium.run_js(
"""
+ return pyodide.globals.get('x') == 1
+ """
)
- assert (
- selenium.run_js(
- """
- return pyodide.globals.get('x') == 1
+ is True
+ )
+ selenium.run(
+ """
+ x = x.byteswap().newbyteorder()
+ """
+ )
+ assert (
+ selenium.run_js(
"""
- )
- is True
+ return pyodide.globals.get('x') == 1
+ """
)
+ is True
+ )
def test_runpythonasync_numpy(selenium_standalone):
| Remove or change toJs behavior for buffers?
If we create a PyProxy of a bytes object and then run the `toJs` method on the `PyProxy`, the resulting `TypedArray` allows the user to modify the `bytes` object even though it's read only and might be shared with a bunch of other places. It also leads to potential use after free errors, and can violate expectations because in every other case `toJs` either copies stuff or returns a `PyProxy` (the other biggest problem with `toJs` is that it creates an unpredictable number of `PyProxy` objects that must be located and freed in order to avoid a leak).
Perhaps we should convert `toJs` to exclusively copy the buffer into a Javascript array, so that `toJs` is slow but safe and returns easy to use buffers whereas `getBuffer` is fast and unsafe but only does the one thing so anyone who reads the docs will see the warnings about modifying readonly buffers.
problem with sending bytes to javascript
Is there any restriction on sending bytes from python to javascript?
```javascript
pyodide.runPython('bytes([10])')
> Uint8ClampedArrayΒ [10]
pyodide.runPython('bytes([10, 11])')
> Uint8ClampedArray(2)Β [24, 0]
pyodide.runPython('bytes([10, 11, 12])')
> Uint8ClampedArray(3)Β [24, 0, 0]
pyodide.runPython('bytes([10, 11, 12, 13])')
> Uint8ClampedArray(4)Β [10, 11, 12, 13]
pyodide.runPython('b"A"')
> Uint8ClampedArrayΒ [65]
pyodide.runPython('b"AB"')
> Uint8ClampedArray(2)Β [24, 0]
pyodide.runPython('b"ABC"')
> Uint8ClampedArray(3)Β [24, 0, 0]
pyodide.runPython('b"ABCD"')
> Uint8ClampedArray(4)Β [65, 66, 67, 68]
```
What's `[24, 0]`?
It's different for different lengths:
```javascript
pyodide.runPython("b'0' * 4")
Uint8ClampedArray(4)Β [48, 48, 48, 48]
pyodide.runPython("b'0' * 100")
Uint8ClampedArray(100)Β [48, 48, 48, 48, 48, 48, 48, 48, 48, 48, ...
pyodide.runPython("b'0' * 200")
Uint8ClampedArray(200)Β [48, 48, 48, 48, 48, 48, 48, 48, 48, 48, ...
pyodide.runPython("b'0' * 300")
Uint8ClampedArray(300)Β [88, 51, 157, 0, 0, 0, 0, 0, 48, 48, 48, ...
```
It's `[88, 51, 157, 0, 0, 0, 0, 0, ...]` this time.
Seems that there is no problem from javascript to python:
```javascript
f = pyodide.runPython(`
def f(x):
print([x, x.tobytes()])
return x
f
`)
> ProxyΒ {$$: {β¦}, length: 0, name: "target", arguments: null, caller: null,Β β¦}
f(new Uint8Array([0]))
> pyodide.asm.js:8 [<memory at 0xa7d308>, b'\x00']
> Uint8ArrayΒ [0]
f(new Uint8Array([0, 1]))
> pyodide.asm.js:8 [<memory at 0xa7d308>, b'\x00\x01']
> Uint8Array(2)Β [0, 1]
f(new Uint8Array([0, 1, 2]))
> pyodide.asm.js:8 [<memory at 0xa7d308>, b'\x00\x01\x02']
> Uint8Array(3)Β [0, 1, 2]
f(new Uint8Array([0, 1, 2, 3]))
> pyodide.asm.js:8 [<memory at 0xa7d308>, b'\x00\x01\x02\x03']
> Uint8Array(4)Β [0, 1, 2, 3]
```
I tested on this version:
```
pyodide.version(): "0.15.0"
pyodide.runPython('sys.version'): "3.7.4 (default, May 20 2020, 19:25:33) [Clang 6.0.1 ]"
Chrome 83
Firefox 76
```
| I think in any case we need a simple way of passing bytes to JS without hacking with the buffer protocol API.
> Perhaps we should convert toJs to exclusively copy the buffer into a Javascript array, so that toJs is slow but safe
Sounds good to me. Generally a single memory copy shouldn't be that bad performance-wise. There are plenty of copies as it is in Python / scientific Python code anyway.
Okay I'll open a PR for this soon. One question: for outer level arrays we have to use Javascript `Array`, but maybe for the innermost layer we should stick to using a `TypedArray`?
Currently `python2js_buffer` has three paths: `NOT_SHAREABLE`, `CONTIGUOUS`, and `NOT_CONTIGUOUS`. I think I could just delete the `CONTIGUOUS` and `NOT_CONTIGUOUS` branches and keep the `NOT_SHAREABLE` branch as-is.
Thanks for the report! I can reproduce. For instance `b"AB"` produces indeed,
- `Uint8ClampedArray [ 24, 0 ]` with v0.15.0 (Python 3.7)
- `Uint8ClampedArray [ 0, 8 ]` on master (Python 3.8)
That doesn't look right. That pattern depending on the input length is also quite strange. There is a test that checks this conversion [here](https://github.com/iodide-project/pyodide/blob/e7dde959b8a6666a257f4d04e0a828c038b7416c/test/test_python.py#L35). It would be good to add another one with your `b"AB"` example.
The conversion should go through [_python2js_bytes](https://github.com/iodide-project/pyodide/blob/e7dde959b8a6666a257f4d04e0a828c038b7416c/src/python2js.c#L153) and then [hiwire_bytes](https://github.com/iodide-project/pyodide/blob/76efcaccf7cb0f478ed1f17fd36cd79f3370980e/src/hiwire.c#L87). I don't see any evident issues there that could produce this result, but it would certainly be worth investigating in more detail.
Assigning the value to a variable changes this behavior. Not a fix, but may help to find the issue.
```javascript
pyodide.runPython('b"ABC"')
Uint8ClampedArray(3)Β [24, 0, 0]
pyodide.runPython('x = b"ABC"; x')
Uint8ClampedArray(3)Β [65, 66, 67]
```
I hit the same bug and tried a couple of things.
I assigned the result to a JavaScript variable and ran other Python scripts. The content of the `Uint8ClampedArray` changed.
I believe it may be related to garbage collector. When it is not assigned to a variable, Python reclaims its memory. | 2021-03-26T22:16:24 |
pyodide/pyodide | 1,457 | pyodide__pyodide-1457 | [
"725"
] | 345ab910f15df5ec0e1a8f6c066f5cae5cc8f435 | diff --git a/packages/micropip/micropip/micropip.py b/packages/micropip/micropip/micropip.py
--- a/packages/micropip/micropip/micropip.py
+++ b/packages/micropip/micropip/micropip.py
@@ -57,6 +57,14 @@ async def gather(*coroutines): # type: ignore
return result
+if IN_BROWSER:
+ from pyodide_js import loadedPackages
+else:
+
+ class loadedPackages: # type: ignore
+ pass
+
+
async def _get_pypi_json(pkgname):
url = f"https://pypi.org/pypi/{pkgname}/json"
fd = await _get_url(url)
@@ -112,6 +120,7 @@ async def _install_wheel(name, fileinfo):
wheel = await _get_url(url)
_validate_wheel(wheel, fileinfo)
_extract_wheel(wheel)
+ setattr(loadedPackages, name, url)
class _PackageManager:
@@ -155,14 +164,15 @@ async def install(self, requirements: Union[str, List[str]], ctx=None):
# Note: branch never happens in out-of-browser testing because we
# report that all dependencies are empty.
self.installed_packages.update(dict((k, None) for k in pyodide_packages))
- wheel_promises.append(pyodide_js.loadPackage(list(pyodide_packages)))
+ wheel_promises.append(
+ asyncio.ensure_future(pyodide_js.loadPackage(list(pyodide_packages)))
+ )
# Now install PyPI packages
for name, wheel, ver in transaction["wheels"]:
wheel_promises.append(_install_wheel(name, wheel))
self.installed_packages[name] = ver
await gather(*wheel_promises)
- return f'Installed {", ".join(self.installed_packages.keys())}'
async def add_requirement(self, requirement: str, ctx, transaction):
if requirement.endswith(".whl"):
@@ -245,8 +255,8 @@ def install(requirements: Union[str, List[str]]):
----------
requirements : ``str | List[str]``
- A requirement or list of requirements to install. Each requirement is a string, which should be either
- a package name or URL to a wheel:
+ A requirement or list of requirements to install. Each requirement is a
+ string, which should be either a package name or URL to a wheel:
- If the requirement ends in ``.whl`` it will be interpreted as a URL.
The file must be a wheel named in compliance with the
@@ -260,8 +270,8 @@ def install(requirements: Union[str, List[str]]):
-------
``Future``
- A ``Future`` that resolves to ``None`` when all packages have
- been downloaded and installed.
+ A ``Future`` that resolves to ``None`` when all packages have been
+ downloaded and installed.
"""
importlib.invalidate_caches()
return asyncio.ensure_future(PACKAGE_MANAGER.install(requirements))
| diff --git a/packages/micropip/test_micropip.py b/packages/micropip/test_micropip.py
--- a/packages/micropip/test_micropip.py
+++ b/packages/micropip/test_micropip.py
@@ -170,3 +170,25 @@ def __init__(self, **entries):
wheel, ver = micropip.PACKAGE_MANAGER.find_wheel(metadata, requirement)
assert ver == "0.15.5"
+
+
+def test_install_different_version(selenium_standalone_micropip):
+ selenium = selenium_standalone_micropip
+ selenium.run_js(
+ """
+ await pyodide.runPythonAsync(`
+ import micropip
+ await micropip.install(
+ "https://files.pythonhosted.org/packages/89/06/2c2d3034b4d6bf22f2a4ae546d16925898658a33b4400cfb7e2c1e2871a3/pytz-2020.5-py2.py3-none-any.whl"
+ );
+ `);
+ """
+ )
+ selenium.run_js(
+ """
+ pyodide.runPython(`
+ import pytz
+ assert pytz.__version__ == "2020.5"
+ `);
+ """
+ )
| Version selection for packages available both on PyPI and in Pyodide
For packages not built in pyodide, version selection works as expected. For instance,
```py
>>> import micropip
>>> micropip.install('idna==2.9') # version before last on PyPi, package not in pyodide
Installed idna
>>> import idna
>>> idna.__version__
2.9
```
However, when one specifies the version for a package available in the pyodide distribution, it is ignored and the version from pyodide is installed regardless of whether PyPI includes the requested version:
```py
>>> import micropip
>>> micropip.install('pytz==2020.1')
Installed pytz
>>> import pytz
>>> pytz.__version__
2019.3
```
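For reference, a hedged sketch of the check one would expect here, using real `packaging` APIs (the helper name and `builtin_versions` mapping are hypothetical):
```python
from packaging.requirements import Requirement
from packaging.version import Version

def should_use_builtin(req_str: str, builtin_versions: dict) -> bool:
    # Only fall back to the pyodide-bundled copy when its version actually
    # satisfies the requested specifier, e.g. "pytz==2020.1".
    req = Requirement(req_str)
    ver = builtin_versions.get(req.name)
    return ver is not None and Version(ver) in req.specifier
```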
| Another manifestation of this bug: when installing a package from a URL, it still ignores the URL and loads the package from pyodide if it's available there,
```py
>>> import micropip
>>> micropip.install('https://files.pythonhosted.org/packages/4f/a4/879454d49688e2fad93e59d7d4efda580b783c745fd2ec2a3adf87b0808d/pytz-2020.1-py2.py3-none-any.whl')
Installed pytz
>>> import pytz
>>> pytz.__version__
2019.3
``` | 2021-04-12T18:33:03 |
pyodide/pyodide | 1,483 | pyodide__pyodide-1483 | [
"1481"
] | 1485732211816a545887b154fc41ce3781785ff7 | diff --git a/packages/micropip/micropip/micropip.py b/packages/micropip/micropip/micropip.py
--- a/packages/micropip/micropip/micropip.py
+++ b/packages/micropip/micropip/micropip.py
@@ -9,6 +9,7 @@
from packaging.requirements import Requirement
from packaging.version import Version
+from packaging.markers import default_environment
# Provide stubs for testing in native python
try:
@@ -133,7 +134,9 @@ def __init__(self):
self.builtin_packages = {}
self.installed_packages = {}
- async def install(self, requirements: Union[str, List[str]], ctx=None):
+ async def gather_requirements(self, requirements: Union[str, List[str]], ctx=None):
+ ctx = ctx or default_environment()
+ ctx.setdefault("extra", None)
if isinstance(requirements, str):
requirements = [requirements]
@@ -149,9 +152,11 @@ async def install(self, requirements: Union[str, List[str]], ctx=None):
)
await gather(*requirement_promises)
+ return transaction
+ async def install(self, requirements: Union[str, List[str]], ctx=None):
+ transaction = await self.gather_requirements(requirements, ctx)
wheel_promises = []
-
# Install built-in packages
pyodide_packages = transaction["pyodide_packages"]
if len(pyodide_packages):
| diff --git a/packages/micropip/test_micropip.py b/packages/micropip/test_micropip.py
--- a/packages/micropip/test_micropip.py
+++ b/packages/micropip/test_micropip.py
@@ -87,24 +87,29 @@ def test_install_custom_url(selenium_standalone_micropip, web_server_tst_data):
)
-def test_add_requirement_relative_url():
- pytest.importorskip("packaging")
- import micropip
-
- transaction = {"wheels": []}
- coroutine = micropip.PACKAGE_MANAGER.add_requirement(
- "./snowballstemmer-2.0.0-py2.py3-none-any.whl", {}, transaction
- )
+def run_sync(coroutine):
# The following is a way to synchronously run a coroutine that does only
- # synchronous operations (and assert that it indeed only did synch
+ # synchronous operations (and assert that it indeed only did sync
# operations)
try:
coroutine.send(None)
- except StopIteration as _result:
- pass
+ except StopIteration as result:
+ return result.args and result.args[0]
else:
raise Exception("Coroutine didn't finish in one pass")
+
+def test_add_requirement_relative_url():
+ pytest.importorskip("packaging")
+ import micropip
+
+ transaction = {"wheels": []}
+ run_sync(
+ micropip.PACKAGE_MANAGER.add_requirement(
+ "./snowballstemmer-2.0.0-py2.py3-none-any.whl", {}, transaction
+ )
+ )
+
[name, req, version] = transaction["wheels"][0]
assert name == "snowballstemmer"
assert version == "2.0.0"
@@ -116,6 +121,28 @@ def test_add_requirement_relative_url():
assert req["url"] == "./snowballstemmer-2.0.0-py2.py3-none-any.whl"
+def test_add_requirement_marker():
+ pytest.importorskip("packaging")
+ import micropip
+
+ transaction = run_sync(
+ micropip.PACKAGE_MANAGER.gather_requirements(
+ [
+ "werkzeug",
+ 'contextvars ; python_version < "3.7"',
+ 'aiocontextvars ; python_version < "3.7"',
+ "numpy ; extra == 'full'",
+ "zarr ; extra == 'full'",
+ "numpy ; extra == 'jupyter'",
+ "ipykernel ; extra == 'jupyter'",
+ "numpy ; extra == 'socketio'",
+ "python-socketio[client] ; extra == 'socketio'",
+ ]
+ )
+ )
+ assert len(transaction["wheels"]) == 1
+
+
def test_install_custom_relative_url(selenium_standalone_micropip):
selenium = selenium_standalone_micropip
root = Path(__file__).resolve().parents[2]
| Issue in using micropip to install package with extra_requires
With the pyodide v0.17.0a2, I have some issues in loading `werkzeug`. It's weird that it tries to load `pytest`, which is not a default runtime dependency (see [setup.py](https://github.com/pallets/werkzeug/blob/master/setup.py)).
`pytest` should only be activated when we do `pip install werkzeug[dev]`.
You can reproduce this with the console: https://pyodide-cdn2.iodide.io/v0.17.0a2/full/console.html
```python
import micropip
micropip.install('werkzeug')
```
In the console it complains `Skipping unknown package ['pytest']`.
| Thanks for the report! I think this bug was fixed in #1457 or #1463. Indeed the problem doesn't occur in `dev/full/console.html`.
Great! I tried with the dev console and indeed it works now. Thank you @hoodmane !
Hi, I noticed that I cannot get the actual package to work; see below:
With the dev console at https://pyodide-cdn2.iodide.io/dev/full/console.html
If I add `await` before `micropip.install` I get an error:
```python
>>> import micropip
>>> await micropip.install(["werkzeug", "imjoy-rpc"])
Traceback (most recent call last):
β File "/lib/python3.8/site-packages/pyodide/console.py", line 233, i
n load_packages_and_run
result = await eval_code_async(source, self.locals)
β File "/lib/python3.8/site-packages/pyodide/_base.py", line 419, in
eval_code_async
return await CodeRunner(
β File "/lib/python3.8/site-packages/pyodide/_base.py", line 278, in
run_async
res = await res
β File "<exec>", line 1, in <module>
β File "/lib/python3.8/asyncio/futures.py", line 260, in __await__
yield self # This tells Task to wait for completion.
β File "/lib/python3.8/asyncio/tasks.py", line 349, in __wakeup
future.result()
β File "/lib/python3.8/asyncio/futures.py", line 178, in result
raise self._exception
β File "/lib/python3.8/asyncio/tasks.py", line 282, in __step
result = coro.throw(exc)
β File "/lib/python3.8/site-packages/micropip.py", line 151, in insta
ll
await gather(*requirement_promises)
β File "/lib/python3.8/asyncio/futures.py", line 260, in __await__
yield self # This tells Task to wait for completion.
β File "/lib/python3.8/asyncio/tasks.py", line 349, in __wakeup
future.result()
β File "/lib/python3.8/asyncio/futures.py", line 178, in result
raise self._exception
β File "/lib/python3.8/asyncio/tasks.py", line 280, in __step
result = coro.send(None)
β File "/lib/python3.8/site-packages/micropip.py", line 222, in add_r
equirement
await self.add_requirement(recurs_req, ctx, transaction)
β File "/lib/python3.8/site-packages/micropip.py", line 202, in add_r
equirement
if not req.marker.evaluate(ctx):
β File "/lib/python3.8/site-packages/packaging/markers.py", line 328,
in evaluate
return _evaluate_markers(self._markers, current_environment)
β File "/lib/python3.8/site-packages/packaging/markers.py", line 244,
in _evaluate_markers
lhs_value = _get_env(environment, lhs.value)
β File "/lib/python3.8/site-packages/packaging/markers.py", line 224,
in _get_env
raise UndefinedEnvironmentName(
βpackaging.markers.UndefinedEnvironmentName: 'extra' does not exist in
evaluation environment.
```
@hoodmane Could you please take a look?
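(The patch at the top of this entry seeds the marker-evaluation context; a hedged sketch of the idea using `packaging` directly:)
```python
from packaging.markers import default_environment
from packaging.requirements import Requirement

# Seed the context so markers that mention `extra` can be evaluated at all;
# without this, packaging raises UndefinedEnvironmentName as in the
# traceback above.
ctx = dict(default_environment())
ctx.setdefault("extra", None)

req = Requirement('pytest ; extra == "dev"')
if req.marker is None or req.marker.evaluate(ctx):
    print("would install", req.name)  # not reached: the extra marker is false here
```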
I think this is a quick fix. I believe this is a regression introduced in #1463. | 2021-04-17T23:21:06 |
pyodide/pyodide | 1,538 | pyodide__pyodide-1538 | [
"1496"
] | 41b4713120d75e21419f759b223f08bcd57e088c | diff --git a/conftest.py b/conftest.py
--- a/conftest.py
+++ b/conftest.py
@@ -181,6 +181,7 @@ def run(self, code):
def run_async(self, code):
return self.run_js(
f"""
+ await pyodide.loadPackagesFromImports({code!r})
let result = await pyodide.runPythonAsync({code!r});
if(result && result.toJs){{
let converted_result = result.toJs();
| diff --git a/packages/micropip/test_micropip.py b/packages/micropip/test_micropip.py
--- a/packages/micropip/test_micropip.py
+++ b/packages/micropip/test_micropip.py
@@ -14,7 +14,7 @@ def selenium_standalone_micropip(selenium_standalone):
selenium_standalone.run_js(
"""
await pyodide.loadPackage("micropip");
- await pyodide.runPythonAsync("import micropip");
+ await pyodide.runPython("import micropip");
"""
)
yield selenium_standalone
diff --git a/packages/numpy/test_numpy.py b/packages/numpy/test_numpy.py
--- a/packages/numpy/test_numpy.py
+++ b/packages/numpy/test_numpy.py
@@ -197,7 +197,8 @@ def test_runwebworker_numpy(selenium_webworker_standalone):
def test_get_buffer(selenium):
selenium.run_js(
"""
- await pyodide.runPythonAsync(`
+ await pyodide.loadPackage(['numpy']);
+ await pyodide.runPython(`
import numpy as np
x = np.arange(24)
z1 = x.reshape([8,3])
@@ -246,7 +247,8 @@ def test_get_buffer(selenium):
def test_get_buffer_roundtrip(selenium, arg):
selenium.run_js(
f"""
- await pyodide.runPythonAsync(`
+ await pyodide.loadPackage(['numpy']);
+ await pyodide.runPython(`
import numpy as np
x = {arg}
`);
@@ -288,7 +290,8 @@ def test_get_buffer_roundtrip(selenium, arg):
def test_get_buffer_big_endian(selenium):
selenium.run_js(
"""
- window.a = await pyodide.runPythonAsync(`
+ await pyodide.loadPackage(['numpy']);
+ window.a = await pyodide.runPython(`
import numpy as np
np.arange(24, dtype="int16").byteswap().newbyteorder()
`);
@@ -315,7 +318,8 @@ def test_get_buffer_error_messages(selenium):
with pytest.raises(Exception, match="Javascript has no Float16 support"):
selenium.run_js(
"""
- await pyodide.runPythonAsync(`
+ await pyodide.loadPackage(['numpy']);
+ await pyodide.runPython(`
import numpy as np
x = np.ones(2, dtype=np.float16)
`);
| Remove loadPackagesFromImports from runPythonAsync?
This would be a backwards incompatible API change but I think it's for the best.
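A hedged sketch of what explicit call sites would look like after the split, mirroring the conftest change in the patch above (`selenium` is the test fixture):
```python
def test_explicit_package_loading(selenium):
    # Package resolution becomes an explicit, separate step instead of a
    # side effect of runPythonAsync.
    selenium.run_js(
        """
        await pyodide.loadPackagesFromImports("import pytz");
        await pyodide.runPythonAsync("import pytz");
        """
    )
```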
| 2021-04-23T12:33:16 |
|
pyodide/pyodide | 1,604 | pyodide__pyodide-1604 | [
"1589"
] | b61996120299dc5d30a19439a7244c1e4c9ec7e4 | diff --git a/conftest.py b/conftest.py
--- a/conftest.py
+++ b/conftest.py
@@ -109,7 +109,7 @@ def javascript_setup(self):
self.run_js("Error.stackTraceLimit = Infinity;", pyodide_checks=False)
self.run_js(
"""
- window.assert = function assert(cb, message=""){
+ window.assert = function(cb, message=""){
if(message !== ""){
message = "\\n" + message;
}
@@ -117,37 +117,55 @@ def javascript_setup(self):
throw new Error(`Assertion failed: ${cb.toString().slice(6)}${message}`);
}
};
- window.assertThrows = function assert(cb, errname, pattern){
- let pat_str = typeof pattern === "string" ? `"${pattern}"` : `${pattern}`;
- let thiscallstr = `assertThrows(${cb.toString()}, "${errname}", ${pat_str})`;
+ window.assertAsync = async function(cb, message=""){
+ if(message !== ""){
+ message = "\\n" + message;
+ }
+ if(await cb() !== true){
+ throw new Error(`Assertion failed: ${cb.toString().slice(12)}${message}`);
+ }
+ };
+ function checkError(err, errname, pattern, pat_str, thiscallstr){
if(typeof pattern === "string"){
pattern = new RegExp(pattern);
}
- let err = undefined;
- try {
- cb();
- } catch(e) {
- err = e;
- }
- console.log(err ? err.message : "no error");
if(!err){
- console.log("hi?");
throw new Error(`${thiscallstr} failed, no error thrown`);
}
if(err.constructor.name !== errname){
- console.log(err.toString());
throw new Error(
`${thiscallstr} failed, expected error ` +
`of type '${errname}' got type '${err.constructor.name}'`
);
}
if(!pattern.test(err.message)){
- console.log(err.toString());
throw new Error(
`${thiscallstr} failed, expected error ` +
`message to match pattern ${pat_str} got:\n${err.message}`
);
}
+ }
+ window.assertThrows = function(cb, errname, pattern){
+ let pat_str = typeof pattern === "string" ? `"${pattern}"` : `${pattern}`;
+ let thiscallstr = `assertThrows(${cb.toString()}, "${errname}", ${pat_str})`;
+ let err = undefined;
+ try {
+ cb();
+ } catch(e) {
+ err = e;
+ }
+ checkError(err, errname, pattern, pat_str, thiscallstr);
+ };
+ window.assertThrowsAsync = async function(cb, errname, pattern){
+ let pat_str = typeof pattern === "string" ? `"${pattern}"` : `${pattern}`;
+ let thiscallstr = `assertThrowsAsync(${cb.toString()}, "${errname}", ${pat_str})`;
+ let err = undefined;
+ try {
+ await cb();
+ } catch(e) {
+ err = e;
+ }
+ checkError(err, errname, pattern, pat_str, thiscallstr);
};
""",
pyodide_checks=False,
| diff --git a/src/tests/test_pyproxy.py b/src/tests/test_pyproxy.py
--- a/src/tests/test_pyproxy.py
+++ b/src/tests/test_pyproxy.py
@@ -392,8 +392,11 @@ def test_pyproxy_mixins2(selenium):
assert(() => !("length" in get_method));
assert(() => !("name" in get_method));
- assert(() => pyodide.globals.get.type === "builtin_function_or_method");
- assert(() => pyodide.globals.set.type === undefined);
+ let d = pyodide.runPython("{}");
+ assert(() => d.$get.type === "builtin_function_or_method");
+ assert(() => d.get.type === undefined);
+ assert(() => d.set.type === undefined);
+ d.destroy();
let [Test, t] = pyodide.runPython(`
class Test: pass
@@ -432,7 +435,8 @@ class Test:
length=7
[Test, Test()]
`);
- assert(() => Test.prototype === "prototype");
+ assert(() => Test.$prototype === "prototype");
+ assert(() => Test.prototype === undefined);
assert(() => Test.name==="me");
assert(() => Test.length === 7);
@@ -451,8 +455,8 @@ def __len__(self):
assert(() => !("length" in Test));
assert(() => t.length === 9);
t.length = 10;
- assert(() => t.length === 10);
- assert(() => t.__len__() === 9);
+ assert(() => t.length === 9);
+ assert(() => t.$length === 10);
let l = pyodide.runPython(`
l = [5, 6, 7] ; l
@@ -612,19 +616,6 @@ def test_pyproxy_copy(selenium):
def test_errors(selenium):
selenium.run_js(
"""
- function expect_error(func){
- let error = false;
- try {
- func();
- } catch(e) {
- if(e.name === "PythonError"){
- error = true;
- }
- }
- if(!error){
- throw new Error(`No PythonError ocurred: ${func.toString().slice(6)}`);
- }
- }
let t = pyodide.runPython(`
def te(self, *args, **kwargs):
raise Exception(repr(args))
@@ -644,19 +635,19 @@ class Temp:
__repr__ = te
Temp()
`);
- expect_error(() => t.x);
- expect_error(() => t.x = 2);
- expect_error(() => delete t.x);
- expect_error(() => Object.getOwnPropertyNames(t));
- expect_error(() => t());
- expect_error(() => t.get(1));
- expect_error(() => t.set(1, 2));
- expect_error(() => t.delete(1));
- expect_error(() => t.has(1));
- expect_error(() => t.length);
- expect_error(() => t.then(()=>{}));
- expect_error(() => t.toString());
- expect_error(() => Array.from(t));
+ assertThrows(() => t.x, "PythonError", "");
+ assertThrows(() => t.x = 2, "PythonError", "");
+ assertThrows(() => delete t.x, "PythonError", "");
+ assertThrows(() => Object.getOwnPropertyNames(t), "PythonError", "");
+ assertThrows(() => t(), "PythonError", "");
+ assertThrows(() => t.get(1), "PythonError", "");
+ assertThrows(() => t.set(1, 2), "PythonError", "");
+ assertThrows(() => t.delete(1), "PythonError", "");
+ assertThrows(() => t.has(1), "PythonError", "");
+ assertThrows(() => t.length, "PythonError", "");
+ assertThrowsAsync(async () => await t, "PythonError", "");
+ assertThrows(() => t.toString(), "PythonError", "");
+ assertThrows(() => Array.from(t), "PythonError", "");
"""
)
@@ -666,66 +657,69 @@ def test_fatal_error(selenium_standalone):
selenium_standalone.run_js(
"""
let fatal_error = false;
+ let old_fatal_error = pyodide._module.fatal_error;
pyodide._module.fatal_error = (e) => {
fatal_error = true;
throw e;
}
- function expect_fatal(func){
- fatal_error = false;
- try {
- func();
- } catch(e) {
- // pass
- } finally {
- if(!fatal_error){
- throw new Error(`No fatal error occured: ${func.toString().slice(6)}`);
+ try {
+ function expect_fatal(func){
+ fatal_error = false;
+ try {
+ func();
+ } catch(e) {
+ // pass
+ } finally {
+ if(!fatal_error){
+ throw new Error(`No fatal error occured: ${func.toString().slice(6)}`);
+ }
}
}
+ let t = pyodide.runPython(`
+ from _pyodide_core import trigger_fatal_error
+ def tfe(*args, **kwargs):
+ trigger_fatal_error()
+ class Temp:
+ __getattr__ = tfe
+ __setattr__ = tfe
+ __delattr__ = tfe
+ __dir__ = tfe
+ __call__ = tfe
+ __getitem__ = tfe
+ __setitem__ = tfe
+ __delitem__ = tfe
+ __iter__ = tfe
+ __len__ = tfe
+ __contains__ = tfe
+ __await__ = tfe
+ __repr__ = tfe
+ __del__ = tfe
+ Temp()
+ `);
+ expect_fatal(() => "x" in t);
+ expect_fatal(() => t.x);
+ expect_fatal(() => t.x = 2);
+ expect_fatal(() => delete t.x);
+ expect_fatal(() => Object.getOwnPropertyNames(t));
+ expect_fatal(() => t());
+ expect_fatal(() => t.get(1));
+ expect_fatal(() => t.set(1, 2));
+ expect_fatal(() => t.delete(1));
+ expect_fatal(() => t.has(1));
+ expect_fatal(() => t.length);
+ expect_fatal(() => t.toString());
+ expect_fatal(() => Array.from(t));
+ t.destroy();
+ a = pyodide.runPython(`
+ from array import array
+ array("I", [1,2,3,4])
+ `);
+ b = a.getBuffer();
+ b._view_ptr = 1e10;
+ expect_fatal(() => b.release());
+ } finally {
+ pyodide._module.fatal_error = old_fatal_error;
}
- let t = pyodide.runPython(`
- from _pyodide_core import trigger_fatal_error
- def tfe(*args, **kwargs):
- trigger_fatal_error()
- class Temp:
- __getattr__ = tfe
- __setattr__ = tfe
- __delattr__ = tfe
- __dir__ = tfe
- __call__ = tfe
- __getitem__ = tfe
- __setitem__ = tfe
- __delitem__ = tfe
- __iter__ = tfe
- __len__ = tfe
- __contains__ = tfe
- __await__ = tfe
- __repr__ = tfe
- __del__ = tfe
- Temp()
- `);
- expect_fatal(() => "x" in t);
- expect_fatal(() => t.x);
- expect_fatal(() => t.x = 2);
- expect_fatal(() => delete t.x);
- expect_fatal(() => Object.getOwnPropertyNames(t));
- expect_fatal(() => t());
- expect_fatal(() => t.get(1));
- expect_fatal(() => t.set(1, 2));
- expect_fatal(() => t.delete(1));
- expect_fatal(() => t.has(1));
- expect_fatal(() => t.length);
- expect_fatal(() => t.then(()=>{}));
- expect_fatal(() => t.toString());
- expect_fatal(() => Array.from(t));
- expect_fatal(() => t.destroy());
- expect_fatal(() => t.destroy());
- a = pyodide.runPython(`
- from array import array
- array("I", [1,2,3,4])
- `);
- b = a.getBuffer();
- b._view_ptr = 1e10;
- expect_fatal(() => b.release());
"""
)
@@ -772,3 +766,23 @@ def assert_call(s, val):
msg = r"TypeError: f\(\) got multiple values for argument 'x'"
with pytest.raises(selenium.JavascriptException, match=msg):
selenium.run_js("f.callKwargs(76, {x : 6})")
+
+
+def test_pyproxy_name_clash(selenium):
+ selenium.run_js(
+ """
+ let d = pyodide.runPython("{'a' : 2}");
+ assert(() => d.get('a') === 2);
+ assert(() => d.$get('b', 3) === 3);
+
+ let t = pyodide.runPython(`
+ class Test:
+ def destroy(self):
+ return 7
+ Test()
+ `);
+ assert(() => t.$destroy() === 7);
+ t.destroy();
+ assertThrows(() => t.$destroy, "Error", "Object has already been destroyed");
+ """
+ )
| Use leading $ to distinguish shadowed PyProxy methods
I am thinking again of the problem of a Python class with a method that gets shadowed by a PyProxy method. For example:
```js
let t = pyodide.runPython(`
class Test:
def destroy(self):
print("destroy!")
Test()
`);
t.destroy(); // should this destroy the PyProxy or call `Test.destroy`?
```
`$` is legal in Javascript identifiers but not in Python identifiers. So I think we should use a trailing `$` for one of these and no trailing `$` for the other:
```js
t.destroy$(); // prints "destroy!"
t.destroy(); // destroys proxy
```
Or maybe the reverse? Or maybe:
```js
t.destroy$py(); // prints "destroy!"
t.destroy$js(); // destroys proxy
```
not sure what the best option here is, but I think using `$` is a sound approach.
| For completeness' sake, some more options would be
- Prefixing every PyProxy method with a `$`, like `t.$destroy()` or `t.$clone()`
- Using symbols like `t[PyProxy.destroy]()`
- Using static methods `PyProxy.destroy(t)`, which could use symbols under the hood
Thanks for opening this issue. I think I would have a preference for `t.$destroy` (or to a lesser extent `t.destroy$`).
> Or maybe the reverse?
On one side it would be more logical to do,
- all class methods without $
- PyProxy methods with $
however that would break backward compatibility and result in code with lots of $ which can be confusing visually.
So maybe let's go with current methods as-is, plus `$` methods to access shadowed class methods, as you proposed.
From the point of view of the type definitions, the Typescript type checker will have a very hard time if we use the Python object when there is a name collision. For this reason, I think the best bet is for e.g., `destroy` to always refer to the method that destroys the object and `$destroy` to refer to a Python field named `destroy`.
Otherwise, either the Typescript types will be wrong or unacceptably complicated. For example typescript will think that `destroy` is a method `(void) => void` and get upset when you try to pass it parameters even though the Python function being called may well expect parameters. You could make a bunch of special types "PyProxy but the destroy method has been shadowed" but it will be a nightmare to write and to use. On the other hand, if `destroy` always means the method that deletes the proxy, everything works great.
Incidentally, I don't know how to do this for `JsProxy` -- as far as I know there is no character that is valid in Python identifiers but not in Javascript ones. Python also doesn't have anything like `Symbol`s. This should probably be a separate issue though. | 2021-05-23T05:33:50 |
pyodide/pyodide | 1,615 | pyodide__pyodide-1615 | [
"1614"
] | 2e0475227255166269d910f260d138274c02d5fc | diff --git a/packages/micropip/micropip/micropip.py b/packages/micropip/micropip/micropip.py
--- a/packages/micropip/micropip/micropip.py
+++ b/packages/micropip/micropip/micropip.py
@@ -186,10 +186,12 @@ async def add_requirement(self, requirement: str, ctx, transaction):
if requirement.endswith(".whl"):
# custom download location
name, wheel, version = _parse_wheel_url(requirement)
+ name = name.lower()
transaction["wheels"].append((name, wheel, version))
return
req = Requirement(requirement)
+ req.name = req.name.lower()
# If there's a Pyodide package that matches the version constraint, use
# the Pyodide package instead of the one on PyPI
diff --git a/pyodide-build/pyodide_build/buildall.py b/pyodide-build/pyodide_build/buildall.py
--- a/pyodide-build/pyodide_build/buildall.py
+++ b/pyodide-build/pyodide_build/buildall.py
@@ -253,6 +253,7 @@ def build_packages(packages_dir: Path, outputdir: Path, args) -> None:
"import_name_to_package_name": {},
"shared_library": {},
"versions": {},
+ "orig_case": {},
}
libraries = [pkg.name for pkg in pkg_map.values() if pkg.library]
@@ -261,13 +262,14 @@ def build_packages(packages_dir: Path, outputdir: Path, args) -> None:
if pkg.library:
continue
if pkg.shared_library:
- package_data["shared_library"][name] = True
- package_data["dependencies"][name] = [
- x for x in pkg.dependencies if x not in libraries
+ package_data["shared_library"][name.lower()] = True
+ package_data["dependencies"][name.lower()] = [
+ x.lower() for x in pkg.dependencies if x not in libraries
]
- package_data["versions"][name] = pkg.version
+ package_data["versions"][name.lower()] = pkg.version
for imp in pkg.meta.get("test", {}).get("imports", [name]):
- package_data["import_name_to_package_name"][imp] = name
+ package_data["import_name_to_package_name"][imp] = name.lower()
+ package_data["orig_case"][name.lower()] = name
# Hack for 0.17.0 release
# TODO: FIXME!!
| diff --git a/packages/micropip/test_micropip.py b/packages/micropip/test_micropip.py
--- a/packages/micropip/test_micropip.py
+++ b/packages/micropip/test_micropip.py
@@ -228,3 +228,18 @@ def test_install_different_version2(selenium_standalone_micropip):
`);
"""
)
+
+
[email protected]("jinja2", ["jinja2", "Jinja2"])
+def test_install_mixed_case2(selenium_standalone_micropip, jinja2):
+ selenium = selenium_standalone_micropip
+ selenium.run_js(
+ f"""
+ await pyodide.loadPackage("micropip");
+ await pyodide.runPythonAsync(`
+ import micropip
+ await micropip.install("{jinja2}")
+ import jinja2
+ `)
+ """
+ )
diff --git a/src/tests/test_package_loading.py b/src/tests/test_package_loading.py
--- a/src/tests/test_package_loading.py
+++ b/src/tests/test_package_loading.py
@@ -181,3 +181,16 @@ def test_js_load_package_from_python(selenium_standalone):
selenium.run(f"import js ; js.pyodide.loadPackage(['{to_load}'])")
assert f"Loading {to_load}" in selenium.logs
assert selenium.run_js("return Object.keys(pyodide.loadedPackages)") == [to_load]
+
+
[email protected]("jinja2", ["jinja2", "Jinja2"])
+def test_load_package_mixed_case(selenium_standalone, jinja2):
+ selenium = selenium_standalone
+ selenium.run_js(
+ f"""
+ await pyodide.loadPackage("{jinja2}");
+ pyodide.runPython(`
+ import jinja2
+ `)
+ """
+ )
| micropip.install of builtin packages should be case insensitive
PyPI distribution names are case insensitive per [PEP-0426](https://www.python.org/dev/peps/pep-0426/#name). However micropip seems to be case sensitive when it comes to distributions in builtin_packages. This is problematic when installing a distribution that depends on one of the builtin_packages but refers to it with different casing, since micropip will instead try to install the dependency from PyPI.
To work around this, you can set the name with the casing used, e.g.:
```python
import micropip
micropip.PACKAGE_MANAGER.builtin_packages['jinja2'] = micropip.PACKAGE_MANAGER.builtin_packages['Jinja2']
```
I came across this wanting to install the altair charting library in jupyterlite. While display is not working yet, I was able to install altair using:
```python
import micropip
await micropip.install('Jinja2')
micropip.PACKAGE_MANAGER.builtin_packages['jinja2'] = micropip.PACKAGE_MANAGER.builtin_packages['Jinja2']
# Last jsonschema version before pyrsistent dependency added (native, not wheel)
await micropip.install("https://files.pythonhosted.org/packages/77/de/47e35a97b2b05c2fadbec67d44cfcdcd09b8086951b331d82de90d2912da/jsonschema-2.6.0-py2.py3-none-any.whl")
await micropip.install("altair")
import sys
sys.setrecursionlimit(255)
```
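For reference, PyPI's own normalization is specified by PEP 503; the patch above only lowercases, which already covers the `Jinja2` case. A minimal sketch of the full rule:
```python
import re

def canonicalize_name(name: str) -> str:
    # PEP 503: collapse runs of '-', '_' and '.' to a single '-', then lowercase.
    return re.sub(r"[-_.]+", "-", name).lower()

assert canonicalize_name("Jinja2") == "jinja2"
assert canonicalize_name("Typing_Extensions") == "typing-extensions"
```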
| 2021-05-30T00:54:13 |
|
pyodide/pyodide | 1,698 | pyodide__pyodide-1698 | [
"1541"
] | 601e474e7dd294c990f898367e362ae5457e08a9 | diff --git a/benchmark/stack_usage.py b/benchmark/stack_usage.py
new file mode 100644
--- /dev/null
+++ b/benchmark/stack_usage.py
@@ -0,0 +1,78 @@
+import pytest
+
+
[email protected](scope="session")
+def print_info():
+ headings = [
+ "browser",
+ "py_usage",
+ "js_depth",
+ "py_depth",
+ "js_depth/py_usage",
+ "js_depth/py_depth",
+ ]
+ fmt = "## {{:{:d}s}} {{:{:d}.2f}} {{:{:d}g}} {{:{:d}g}} {{:{:d}g}} {{:{:d}g}}".format(
+ *map(len, headings)
+ )
+ printed_heading = False
+
+ def print_info(*args):
+ nonlocal printed_heading
+ if not printed_heading:
+ printed_heading = True
+ print("## " + " ".join(headings))
+ print(fmt.format(*args))
+
+ yield print_info
+
+
[email protected]_refcount_check
[email protected]_pyproxy_check
+def test_stack_usage(selenium, print_info):
+ res = selenium.run_js(
+ """
+ window.measure_available_js_stack_depth = () => {
+ let depth = 0;
+ function recurse() { depth += 1; recurse(); }
+ try { recurse(); } catch (err) { }
+ return depth;
+ };
+ let py_usage = pyodide.runPython(`
+ from js import measure_available_js_stack_depth
+ def recurse(n):
+ return measure_available_js_stack_depth() if n==0 else recurse(n-1)
+ (recurse(0)-recurse(100))/100
+ `);
+ let js_depth = measure_available_js_stack_depth();
+ window.py_depth = [0];
+ try {
+ pyodide.runPython(`
+ import sys
+ from js import py_depth
+ sys.setrecursionlimit(2000)
+ def infiniterecurse():
+ py_depth[0] += 1
+ infiniterecurse()
+ infiniterecurse()
+ `);
+ } catch {}
+
+ py_depth = py_depth[0];
+ return [
+ py_usage,
+ js_depth,
+ py_depth,
+ Math.floor(js_depth/py_usage),
+ Math.floor(js_depth/py_depth),
+ ]
+ """
+ )
+ # "py_usage",
+ # "js_depth",
+ # "py_depth",
+ # "js_depth/py_usage",
+ # "js_depth/py_depth",
+
+ print_info(selenium.browser, *res)
+
+ selenium.clean_logs()
| CPython stack frame size estimation
The JavaScript stack capacity is limited in web browsers. In turn, this imposes a limit on the maximum CPython stack size and maximum recursion depth in a python program.
Some python applications hit the stack limit. One example of such a python module is `jedi` (an autocompletion, static analysis and refactoring library).
Although we cannot do anything about the JavaScript stack limit in web browsers, we noticed that the stack frame size of CPython can be optimized. By reducing the stack frame size, it is possible to achieve higher recursion depth.
We observe that the stack frame is approx 18% larger in pyodide version 0.17.0 (as of rev. c327405), compared to pyodide versions 0.14.1 and 0.15.0.
However, we see an improvement in this aspect when comparing version 0.17.0 to version 0.16.1.
We use this script to approximate the CPython stack frame size:
```javascript
loadPyodide()
measure_available_js_stack_depth = () => {
let depth = 0;
function recurse() { depth += 1; recurse(); }
try { recurse(); } catch (err) { }
return depth;
}; pyodide.runPython("import js\ndef recurse(n):\n return js.measure_available_js_stack_depth() if n==0 else recurse(n-1)\n(recurse(0)-recurse(100))/100")
```
The code snippet above is to be pasted to the JavaScript console in the `build/test.html` page. The script outputs a single number, which is the relative stack frame size of CPython. The smaller the better.
The pyodide team [has shown an interest](https://github.com/pyodide/pyodide/issues/1331#issuecomment-823483383) in integrating stack frame size estimation into CI to hunt down regressions.
This issue was created to facilitate any discussion on that topic.
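A hedged back-of-the-envelope reading of the measurement (the `47.33` frame cost is reported later in this thread; the JS depth is hypothetical and browser-dependent):
```python
js_stack_depth = 12_000   # hypothetical: JS frames available in some browser
py_frame_cost = 47.33     # JS frames consumed per Python frame (measured below)
max_py_recursion = int(js_stack_depth / py_frame_cost)
print(max_py_recursion)   # rough upper bound on usable Python recursion depth
```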
| Great! Would you be willing to open a PR adding this script to the CI? I think the best way is to copy the existing test-core-chrome and test-core-firefox jobs and swap out the command.
That's very useful thanks!
> I think the best way is to copy the existing test-core-chrome and test-core-firefox jobs and swap out the command.
Maybe rather add it as part of benchmarks? We already have things like load time there, and the generated .json artifact would allow to follow the evolution with time.
Would someone be interested in opening a question in https://discuss.python.org/ (section users) and asking if there is anything else we can do to build CPython for an environment with a small stack size?
I'm open to compiling with `-Os` (mentioned earlier) if the performance impact is not too large.
> I'm open to compiling with `-Os` (mentioned earlier) if the performance impact is not too large.
This is our patch that enables size optimizations and reduces the relative CPython stack frame size from `47.33` to `41.34` (tested on revision 958724101a9cd6a1181feadcec811f26153ff828):
```diff
diff --git a/cpython/Makefile b/cpython/Makefile
index 3c0220b..411e9cc 100644
--- a/cpython/Makefile
+++ b/cpython/Makefile
@@ -111,6 +111,7 @@ $(BUILD)/Makefile: $(BUILD)/.patched $(ZLIBBUILD)/.configured $(SQLITEBUILD)/lib
cd $(BUILD); \
CONFIG_SITE=./config.site READELF=true emconfigure \
./configure \
+ OPT="-fwrapv -Os -Wall" \
CFLAGS="-fPIC -sEMULATE_FUNCTION_POINTER_CASTS=1" \
CPPFLAGS="-I$(SQLITEBUILD) -I$(BZIP2BUILD) -I$(ZLIBBUILD)" \
--without-pymalloc \
```
A welcome side-effect of this change is the reduced size of `pyodide.asm.wasm`, from 10212kB to 9316kB.
It would be great if the size optimization were set by default for CPython. We haven't evaluated the performance impacts, though.
Could the optimization flag be passed as an argument to the top-level `Makefile`? This way, it would be easy to do custom pyodide builds fitted for a particular use-case.
> Could the optimization flag be passed as an argument to the top-level `Makefile`?
+1, a PR doing this would be very welcome.
> It would be great if the size optimization were set by default for CPython. We haven't evaluated the performance impacts, though.
Yeah, that's consistent with my earlier experiments in https://github.com/pyodide/pyodide/issues/1365#issuecomment-807364368
> Could the optimization flag be passed as an argument to the top-level Makefile?
I think it should have used https://github.com/pyodide/pyodide/blob/ffa99153cc4ccb233ad11d3cd58108f06707d01f/Makefile.envs#L28
and we could change that to `Os` (or `Oz`).
And yes ideally we should move all such global config settings to `Makefile.envs`, then optionally use the corresponding env variable if it's defined, as in https://github.com/pyodide/pyodide/blob/ffa99153cc4ccb233ad11d3cd58108f06707d01f/Makefile.envs#L18-L19
A single stack frame for a simple Python function call looks like this:
```js
at pyodide.asm.wasm:0x1f08e1
at _PyFunction_Vectorcall (pyodide.asm.wasm:0x1f06b2)
at byn$fpcast-emu$_PyFunction_Vectorcall (pyodide.asm.wasm:0x7eec18)
at pyodide.asm.wasm:0x2d9f0d
at _PyEval_EvalFrameDefault (pyodide.asm.wasm:0x2d7383)
at byn$fpcast-emu$_PyEval_EvalFrameDefault
```
So the first thing of note is that the `byn$fpcast-emu` calls are HUGE stack hogs: they take 63 64-bit integers as their argument. I wonder if we could change the fpcast system to instead pass a giant struct. That struct would then be allocated in the "argument stack" and wouldn't contribute to call stack overflows -- I think we have a lot more argument stack space if we need it, and we can control how much argument stack space we allocate for ourselves.
Another option is we could try to disable fpcast emulation for these particular functions.
Another thing we could do is make `_PyEval_EvalFrame` directly call `_PyEval_EvalFrameDefault`, short-circuiting the `tstate->interp->eval_frame` mechanism. Not likely anyone can run a JIT in Pyodide anyways. That would drop one `byn$fpcast-emu` frame and one smaller stack frame.
This stack usage measurement doesn't seem to work in firefox. | 2021-07-09T20:20:13 |
|
pyodide/pyodide | 1,742 | pyodide__pyodide-1742 | [
"1529"
] | 684269285b5341c72e5a95e020c9884360e0faab | diff --git a/src/py/_pyodide/_core_docs.py b/src/py/_pyodide/_core_docs.py
--- a/src/py/_pyodide/_core_docs.py
+++ b/src/py/_pyodide/_core_docs.py
@@ -1,6 +1,6 @@
# type: ignore
-from typing import Any, Callable
+from typing import Any, Callable, Iterable
# All docstrings for public `core` APIs should be extracted from here. We use
# the utilities in `docstring.py` and `docstring.c` to format them
@@ -130,7 +130,8 @@ def to_js(
*,
depth: int = -1,
pyproxies: JsProxy = None,
- create_pyproxies: bool = True
+ create_pyproxies: bool = True,
+ dict_converter: Callable[[Iterable[JsProxy]], JsProxy] = None,
) -> JsProxy:
"""Convert the object to Javascript.
@@ -160,6 +161,15 @@ def to_js(
create_pyproxies: bool, default=True
If you set this to False, :any:`to_js` will raise an error
+ dict_converter: Callable[[Iterable[JsProxy]], JsProxy], defauilt = None
+ This converter if provided recieves a (Javascript) iterable of
+ (Javascript) pairs [key, value]. It is expected to return the
+ desired result of the dict conversion. Some suggested values for
+ this argument:
+
+ js.Map.new -- similar to the default behavior
+ js.Array.from -- convert to an array of entries
+ js.Object.fromEntries -- convert to a Javascript object
"""
return obj
| diff --git a/src/tests/test_typeconversions.py b/src/tests/test_typeconversions.py
--- a/src/tests/test_typeconversions.py
+++ b/src/tests/test_typeconversions.py
@@ -403,6 +403,63 @@ class Test: pass
)
+def test_dict_converter(selenium):
+ assert (
+ selenium.run_js(
+ """
+ self.arrayFrom = Array.from;
+ return pyodide.runPython(`
+ from js import arrayFrom
+ from pyodide import to_js
+ res = to_js({ x : x + 2 for x in range(5)}, dict_converter=arrayFrom)
+ res
+ `)
+ """
+ )
+ == [[0, 2], [1, 3], [2, 4], [3, 5], [4, 6]]
+ )
+
+ assert (
+ selenium.run_js(
+ """
+ let px = pyodide.runPython("{ x : x + 2 for x in range(5)}");
+ let result = px.toJs({dict_converter : Array.from});
+ px.destroy();
+ return result;
+ """
+ )
+ == [[0, 2], [1, 3], [2, 4], [3, 5], [4, 6]]
+ )
+
+ assert (
+ selenium.run_js(
+ """
+ return pyodide.runPython(`
+ from js import Object
+ from pyodide import to_js
+ res = to_js({ x : x + 2 for x in range(5)}, dict_converter=Object.fromEntries)
+ res
+ `);
+ """
+ )
+ == {"0": 2, "1": 3, "2": 4, "3": 5, "4": 6}
+ )
+
+ assert (
+ selenium.run_js(
+ """
+ let px = pyodide.runPython("{ x : x + 2 for x in range(5)}");
+ let result = px.toJs({dict_converter : Object.fromEntries});
+ px.destroy();
+ return result;
+ """
+ )
+ == {"0": 2, "1": 3, "2": 4, "3": 5, "4": 6}
+ )
+
+ selenium.run("del res; del arrayFrom; del Object")
+
+
def test_python2js_long_ints(selenium):
assert selenium.run("2**30") == 2 ** 30
assert selenium.run("2**31") == 2 ** 31
| Passing python dict into a js function
Hi, I just noticed that we now (v0.17.0) convert python dict to a `Map` in js, and it break my previous code.
I have 3 questions:
1. What would be the easiest way to send a dictionary in Python to Js so it will become a regular object.
2. What is the reason to choose converting `dict` to `Map`? I find it's bit inconvenient if we get a Map instead of an object by default. I mean, `Object`/`{}` is obviously more commonly used than `Map` in JS, wouldn't it be a better default type for converting `dict` in Python?
3. Previously, if we import js function (e.g. `postMessage`) into Python, then send a python dictionary( `js.postMessage({'data': 123})`, it automatically convert the arguments into js object, but now it complains that `JsException: DataCloneError: Failed to execute 'postMessage' on 'DedicatedWorkerGlobalScope': [object Object] could not be cloned.`.
| Hi @oeway, thanks for the question. We didn't have time to vet this as well as I would have liked and so there are some usability issues. Ideally it would be good to have a careful discussion about each detail like this, but since we have such limited developer time available on this project, there's inevitably going to be things that we don't consider carefully enough.
Though also this could have been discussed before the release if we'd made a beta release and given people a week or two with it. The four days we allowed for that was obviously not enough.
3. You can do `js.postMessage(**{'data': 123})` or `js.postMessage(data=123)`. Also, it's okay to do `from js import postMessage` and use `postMessage` directly now if you prefer.
.
1. If the object is the last argument:
```py
jsfunc(arg1, arg2, **my_dict)
```
If the object is not the last argument:
```py
from js import Object
from pyodide import to_js
jsfunc(Object.from_entries(to_js(my_dict)), arg2)
```
2. There are several reasons pointing in each direction. The main reason I chose `Map` is that Objects only can have strings as keys, whereas `Map` can have anything. (This isn't as beneficial as I thought though, because only strings or numbers are looked up by value, everything else keys on object identity.) Going from only Strings to Strings or Numbers isn't necessarily enough of a benefit to make it worth the annoyance.
Also, on a PyProxy of a dict you would look up a key with `proxy.get(key)`. If we convert it to a map, then `proxy.toJs().get(key)` does the same thing. It seemed nice to keep the `getattr`/`getitem` distinction if possible.
Also, `pyodide.toPy(my_dict.toJs())` will give back a dict equal to the original one this way, whereas with an object, `{ 1 : 2}` will turn into `{"1" : 2}`.
But at the moment I am thinking it might have been a better idea to keep the `Object`.
Ideally we should make it so that `toJs` gives people some choice about how things are converted.
Hi @hoodmane Thanks for you prompt answer! Sure I understand that, and the team have been making tremendous progress!
Regarding 1 and 3, if I have nested dictionaries, that won't work, right? Would it make sense to restore the behaviour that automatically convert nested dictionaries automatically into nested objects when calling js functions?
For 2, I think it might worth more discussion on usability vs accuracy here. What you described all make sense, but it does reduce the usability.
Let me also listed a few cases where Object is more practical rather than Map:
For the following python code:
```python
map = {"key1": "value1", "key2": "value2"}
print(map["key1"])
```
Using Map would be like this:
```js
let map = new Map([["key1", "value1"], ["key2", "value2"]])
console.log(map.get("key1"))
```
But if we use object in js, the same python code can essentially work with little change:
```js
let map = {"key1": "value1", "key2": "value2"}
console.log(map["key1"])
```
Another thing is that, if we do JSON serialisation on Map, it won't work directly:
```js
let map = new Map([["key1", "value1"], ["key2", "value2"]])
JSON.stringify(map) // returns "{}"
```
> Also, pyodide.toPy(my_dict.toJs()) will give back a dict equal to the original one this way, whereas with an object, { 1 : 2} will turn into {"1" : 2}.
I think it's not exactly {"1": 2}, because in JS you can also use number as index for object:
```js
let a=1, b={}
b[a] = 2
b[1] === 2 // true
b["1"] === 2 // true
```
This will then alleviate the conversion issue? This means, it only becomes an issue if we have both `1` and `"1"` as key in a python dictionary and we convert to JS object they cannot co-exist. However, I would say, this is very rare in practice, most people will just use string as key and if they want to use number they will not mix with string keys, or they might just go for list/Array -- it will be very confusing if they start to mix number-string and numbers as key in the same dictionary.
What do you think?
> I have nested dictionaries, that won't work, right?
Right.
> it might worth more discussion on usability vs accuracy here.
Underlying problem is that I am a mathematician and sometimes I suffer from obsessive accuracy. It's important to get feedback to check that occasionally.
> Another thing is that, if we do JSON serialisation on Map, it won't work directly:
=(
> it will be very confusing if they start to mix number-string and numbers as key in the same dictionary.
I agree that this use case is quite rare. In particular, the vast majority of the time dict keys will have only one type. In the rare case that keys are mixed types, they will very very likely be mixed between a bunch of different types and we can only handle strings, numbers, or one extra type.
One extra issue with `Object` is what if the dict has keys that overlap with `Object.prototype`? Options:
1. Throw error
2. Use `Object.create(null)`.
What do you think about `Object.create(null)`?
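(A hedged illustration of the prototype clash from the Python side, via Pyodide's `js` module:)
```python
from js import Object
from pyodide import to_js

# to_js produces a Map here, which Object.fromEntries consumes as entries.
obj = Object.fromEntries(to_js({"constructor": 1}))
# obj.constructor is now 1, shadowing Object.prototype.constructor; an
# Object.create(null) result would inherit no prototype names at all, at
# the cost of surprising code that expects ordinary objects.
```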
Anyways you have convinced me that this needs to be changed. In fact, I was 90% convinced when I realized that `Object.toJs` wouldn't work directly with `fetch`:
https://pyodide.org/en/stable/usage/faq.html#how-can-i-use-fetch-with-optional-arguments-from-python
This should have been fixed in a beta release though. As I said above, next time we need to make a beta release and ensure that people have a more reasonable amount of time with it.
> This means, it only becomes an issue if we have both 1 and "1" as key in a python dictionary and we convert to JS object they cannot co-exist.
Wasn't the issue also that if you have a `Dict[int, Any]` if you do a round trip conversion to JS you would end up with `Dict[str, Any]` ? Or I guess that only would be an issue with an explicit conversion.
> Anyways you have convinced me that this needs to be changed
We could consider it a bug fix for 0.17.1 if necessary, but it would be good to get more user feedback from existing users as well.
> if you have a `Dict[int, Any]` if you do a round trip conversion to JS you would end up with `Dict[str, Any]`
Yeah that was the issue. This is alleviated by proxying dicts, but even with explicit conversions it's still super undesirable.
> a bug fix for 0.17.1
So the options are:
* Keep current behavior but add variations or more flexibility to toJs to make object case more ergonomic
* Change default to `Object`, add more flexibility to allow `Map` when desired.
* Use `Object.create(null)` (cleaner than `Object` because there are no `Object.prototype` name clashes but some some things are not expecting it and break weirdly).
If we want feedback on this, maybe we should set up a thread about this with a poll, then tweet it out?
Sounds good. Maybe we could indeed open a RFC issue with these 3 options, examples and advantages/disadvantages of each, then advertise it a bit? A poll could be a good idea, but my concern is that it's also easy to vote without understanding all the technical implications, and this has a number of those.
> open a RFC issue
Sounds good. Maybe also make a way for people to subscribe to RFCs, then we can tweet / @ a bunch of people suggesting they might want to subscribe. The approach I used in #1331 certainly works for an RFC in a pinch, but it'd be good to give people the option to opt into the RFC rather than having me just look through threads for people who seem like they'd want to know.
Having the same issues as posted by @oeway. Been reviving a project which used 0.15.0, and upgrading to 0.17.0 broke it for the same reasons mentioned above. I'm keen to help with contribution if help is needed, but I might need a bit of hand holding initially, as my raw C experience is limited mainly to past university projects.
@ibdafna Well we still haven't quite decided what should be done. I'm wondering what an API that allows a choice of this behavior would look like. Maybe we could just take a flag `toJs({ dict_to_object : true })`? Anyways I think our plan is still to try to get more input on this.
If you have an idea of what the logic you would like is, it'd be great if you would open a PR but just be aware that we may not be able to merge if we decide on a different design.
I would be happy to answer any questions you have about how the code works.
The relevant function is here:
https://github.com/pyodide/pyodide/blob/2a906bd48969ca373d8421949b7b10b1f1ccb43a/src/core/python2js.c#L141
The functions `JsMap_New` and `JsMap_Set` are here:
https://github.com/pyodide/pyodide/blob/2a906bd48969ca373d8421949b7b10b1f1ccb43a/src/core/hiwire.c#L746
It's possible we could revert the behavior of this as 0.17.1 and possibly make a more flexible API in the future.
I bumped into this recently, and it was annoying, but it's not too hard to convert a nested map into an object.
```js
const toObject = (x) => {
if (x instanceof Map) {
return Object.fromEntries(Array.from(
x.entries(),
([k, v]) => [k, toObject(v)]
))
} else if (x instanceof Array) {
return x.map(toObject);
} else {
return x;
}
}
```
And from Python you could do e.g.,
```py
from js import fetch, toObject # if toObject is a global
from pyodide import to_js
fetch(url, toObject(to_js(options_dict)))
```
I think this is okay for now and we can plan to make a more convenient solution in v0.18.
I would prefer `fetch(url, options_dict)` ;)
Looking forward to v0.18!
Unfortunately we can't support `fetch(url, options_dict)`.
Would love to see dict -> object conversion being added in the next release!
> If the object is not the last argument:
>
> ```python
> from js import Object
> from pyodide import to_js
> jsfunc(Object.from_entries(to_js(my_dict)), arg2)
> ```
>
I was a bit confused since I couldn't make this code work, turns out the correct call should be:
```python
jsfunc(Object.fromEntries(to_js(my_dict)), arg2)
``` | 2021-07-23T16:47:45 |
pyodide/pyodide | 1,819 | pyodide__pyodide-1819 | [
"616"
] | 69a9fc760d393fca9219ca9caecd097d6b9c4833 | diff --git a/src/py/_pyodide/_base.py b/src/py/_pyodide/_base.py
--- a/src/py/_pyodide/_base.py
+++ b/src/py/_pyodide/_base.py
@@ -515,7 +515,8 @@ def find_imports(source: str) -> List[str]:
Returns
-------
``List[str]``
- A list of module names that are imported in ``source``.
+ A list of module names that are imported in ``source``. If ``source`` is not
+ syntactically correct Python code (after dedenting), returns an empty list.
Examples
--------
@@ -527,7 +528,10 @@ def find_imports(source: str) -> List[str]:
# handle mis-indented input from multi-line strings
source = dedent(source)
- mod = ast.parse(source)
+ try:
+ mod = ast.parse(source)
+ except SyntaxError:
+ return []
imports = set()
for node in ast.walk(mod):
if isinstance(node, ast.Import):
| diff --git a/src/tests/test_pyodide.py b/src/tests/test_pyodide.py
--- a/src/tests/test_pyodide.py
+++ b/src/tests/test_pyodide.py
@@ -11,16 +11,26 @@
def test_find_imports():
res = find_imports(
- dedent(
- """
- import numpy as np
- from scipy import sparse
- import matplotlib.pyplot as plt
- """
- )
+ """
+ import numpy as np
+ from scipy import sparse
+ import matplotlib.pyplot as plt
+ """
)
assert set(res) == {"numpy", "scipy", "matplotlib"}
+ # If there is a syntax error in the code, find_imports should return empty
+ # list.
+ res = find_imports(
+ """
+ import numpy as np
+ from scipy import sparse
+ import matplotlib.pyplot as plt
+ for x in [1,2,3]
+ """
+ )
+ assert res == []
+
def test_code_runner():
assert should_quiet("1+1;")
| RunPythonAsync catch not working "Uncaught (in promise) Error"
Hi, I'm currently trying the Pyodide library inside a WebWorker, in a React project.
So my worker, will run the python code (which the user enter) and show the output even if there's an error.
Here is my code :
```js
// myWorker.js
const data = {
python: 'impt numpy as np'
}
languagePluginLoader.then(() => {
pyodide
.runPythonAsync(data.python, () => {})
.then(results => {
console.log(results);
self.postMessage({ output: results });
})
.catch(err => {
console.log(err);
self.postMessage({ error: err.message });
});
});
```
The problem here is that the catch of `runPythonAsync(...).catch(err => {})` won't catch this kind of error thrown by the `pyodide.asm.js` file; we'll only see it in the console.

```
Uncaught (in promise) Error: Traceback (most recent call last):
File "/lib/python3.7/site-packages/pyodide.py", line 58, in find_imports
mod = ast.parse(code)
File "/lib/python3.7/ast.py", line 35, in parse
return compile(source, filename, mode, PyCF_ONLY_AST)
File "<unknown>", line 1
impt numpy as np
^
SyntaxError: invalid syntax
at _hiwire_throw_error (http://localhost:3000/static/modules/pyodide.asm.js:8:38727)
at wasm-function[358]:0xfc6c0
at wasm-function[386]:0xfe474
at Object.Module.__findImports (http://localhost:3000/static/modules/pyodide.asm.js:8:1989949)
at Object.Module.runPythonAsync (http://localhost:3000/static/modules/pyodide.asm.js:8:43347)
at http://localhost:3000/443e11adcc88e32bff24.worker.js:106:8
```
So how can I handle this type of Python error? Right now the catch only lets me catch errors when something goes wrong in the `then(()=> {})` function.
Thanks.
| Wrap the call in a try...catch block, and await the result inside the block
`try {await pyodide.runPythonAsync('imprt numpy')} catch (e) {console.log(e)}`
I managed to make it work without the async/await method.
```js
/* eslint-disable */
import languagePluginLoader from "./pyodide.js";
self.addEventListener("message", e => {
languagePluginLoader.then(() => {
const data = {
python: 'impt numpy as np'
}
pyodide
.runPythonAsync(data.python, () => {})
.then(results => {
self.postMessage({
output: results,
target: data.uniqueUrlWithId
});
})
.catch(err => { // This catch won't be triggered, --this is the first catch--
console.log(err);
});
}) // end of languagePluginLoader.then()
.catch((error) => { // This one will be trigger but why ?
console.log(error);
});
});
```
Does the first catch will only be triggered if there's an error in the then function of `runPythonAsync()` ?
Hi, I face the same issue using the following code.
```ts
try {
this.pyodide.runPythonAsync(
data.python,
(messageProgress: any) => {
        console.error('messageProgress', messageProgress); // <====== this is not called, don't know why
},
(error: any) => {
// TODO cannot catch the error, it a bug
console.error('runPythonAsync error', error);
workerSelf.postMessage({error: error} as ResultType); // <====== this not be call, dont known why
},
)
.then((results: any) => {
const stdout = this.pyodide.runPython('sys.stdout.getvalue()');
workerSelf.postMessage({
results: results,
print: {msg: stdout},
total: true,
} as ResultType);
}).catch((err: any) => {
// if you prefer messages with the error
workerSelf.postMessage({error: err.message} as ResultType);
// if you prefer onerror events
// setTimeout(() => { throw err; });
}).finally(() => {
        workerSelf.postMessage({allEnd: true} as ResultType); // <====== this does not work
});
} catch (error) {
    console.error('runPythonAsync error', error); // <====== BUT this works
workerSelf.postMessage({error: error} as ResultType);
}
```
seems like `pyodide.runPythonAsync` will throw an `Error` when it faces `SyntaxError: invalid syntax`, but in js style it must reject the Promise, not throw an error.
I'm sure it's a bug.
---
BTW: the API documentation for `runPythonAsync` seems wrong.
https://pyodide.readthedocs.io/en/latest/js-api/pyodide_runPythonAsync.html#js-api-pyodide-runpythonasync
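For reference, the patch at the top of this entry makes `find_imports` swallow the `SyntaxError` and return an empty list instead; a standalone sketch of that behavior:
```python
import ast
from typing import List

def find_imports(source: str) -> List[str]:
    # On syntactically invalid input, report no imports rather than raising,
    # so callers don't have to pre-validate user code.
    try:
        mod = ast.parse(source)
    except SyntaxError:
        return []
    imports = set()
    for node in ast.walk(mod):
        if isinstance(node, ast.Import):
            imports.update(alias.name.split(".")[0] for alias in node.names)
        elif isinstance(node, ast.ImportFrom) and node.module:
            imports.add(node.module.split(".")[0])
    return list(imports)
```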
I just got bitten by a similar problem which I'm fixing in https://github.com/alexmojaki/futurecoder/pull/176
I don't think the original report applies any more since runPythonAsync no longer uses loadPackagesFromImports automatically, but I think it would still be best if `find_imports` caught SyntaxError and returned an empty list. I don't think users of these functions should have to check for valid syntax beforehand - it's not only inconvenient but easy to forget, assuming you even know it's necessary in the first place (I didn't). In fact I checked my original code with two maintainers here and it didn't seem to occur to them either. | 2021-09-07T13:53:32 |
pyodide/pyodide | 1,822 | pyodide__pyodide-1822 | [
"1792"
] | 1010c0e3bfd0a6d1c3f9cac4093ff7a358ffaca2 | diff --git a/src/py/_pyodide/console.py b/src/py/_pyodide/console.py
--- a/src/py/_pyodide/console.py
+++ b/src/py/_pyodide/console.py
@@ -60,6 +60,9 @@ def write(self, text):
def flush(self):
pass
+ def isatty(self) -> bool:
+ return True
+
class _ReadStream:
"""A utility class so we can specify our own handler for reading from stdin"""
@@ -74,6 +77,9 @@ def readline(self, n=-1):
def flush(self):
pass
+ def isatty(self) -> bool:
+ return True
+
class _Compile(Compile):
"""Compile code with CodeRunner, and remember future imports
@@ -200,15 +206,15 @@ class Console:
globals : ``dict``
The global namespace in which to evaluate the code. Defaults to a new empty dictionary.
+ stdin_callback : ``Callable[[str], None]``
+ Function to call at each read from ``sys.stdin``. Defaults to ``None``.
+
stdout_callback : ``Callable[[str], None]``
Function to call at each write to ``sys.stdout``. Defaults to ``None``.
stderr_callback : ``Callable[[str], None]``
Function to call at each write to ``sys.stderr``. Defaults to ``None``.
- stdin_callback : ``Callable[[str], None]``
- Function to call at each read from ``sys.stdin``. Defaults to ``None``.
-
persistent_stream_redirection : ``bool``
Should redirection of standard streams be kept between calls to :any:`runcode <Console.runcode>`?
Defaults to ``False``.
@@ -221,15 +227,15 @@ class Console:
globals : ``Dict[str, Any]``
The namespace used as the global
+ stdin_callback : ``Callback[[str], None]``
+ Function to call at each read from ``sys.stdin``.
+
stdout_callback : ``Callback[[str], None]``
Function to call at each write to ``sys.stdout``.
stderr_callback : ``Callback[[str], None]``
Function to call at each write to ``sys.stderr``.
- stdin_callback : ``Callback[[str], None]``
- Function to call at each read from ``sys.stdin``.
-
buffer : ``List[str]``
The list of strings that have been :any:`pushed <Console.push>` to the console.
@@ -241,9 +247,9 @@ def __init__(
self,
globals: Optional[dict] = None,
*,
+ stdin_callback: Optional[Callable[[str], None]] = None,
stdout_callback: Optional[Callable[[str], None]] = None,
stderr_callback: Optional[Callable[[str], None]] = None,
- stdin_callback: Optional[Callable[[str], None]] = None,
persistent_stream_redirection: bool = False,
filename: str = "<console>",
):
@@ -252,9 +258,9 @@ def __init__(
self.globals = globals
self._stdout = None
self._stderr = None
+ self.stdin_callback = stdin_callback
self.stdout_callback = stdout_callback
self.stderr_callback = stderr_callback
- self.stdin_callback = stdin_callback
self.filename = filename
self.buffer: List[str] = []
self._lock = asyncio.Lock()
@@ -297,22 +303,18 @@ def _stdstreams_redirections_inner(self):
yield
return
redirects = []
+ if self.stdin_callback:
+ stdin_name = getattr(sys.stdin, "name", "<stdin>")
+ stdin_stream = _ReadStream(self.stdin_callback, name=stdin_name)
+ redirects.append(redirect_stdin(stdin_stream))
if self.stdout_callback:
- redirects.append(
- redirect_stdout(
- _WriteStream(self.stdout_callback, name=sys.stdout.name)
- )
- )
+ stdout_name = getattr(sys.stdout, "name", "<stdout>")
+ stdout_stream = _WriteStream(self.stdout_callback, name=stdout_name)
+ redirects.append(redirect_stdout(stdout_stream))
if self.stderr_callback:
- redirects.append(
- redirect_stderr(
- _WriteStream(self.stderr_callback, name=sys.stderr.name)
- )
- )
- if self.stdin_callback:
- redirects.append(
- redirect_stdin(_ReadStream(self.stdin_callback, name=sys.stdin.name))
- )
+ stderr_name = getattr(sys.stderr, "name", "<stderr>")
+ stderr_stream = _WriteStream(self.stderr_callback, name=stderr_name)
+ redirects.append(redirect_stderr(stderr_stream))
try:
self._streams_redirected = True
with ExitStack() as stack:
| diff --git a/src/tests/test_console.py b/src/tests/test_console.py
--- a/src/tests/test_console.py
+++ b/src/tests/test_console.py
@@ -236,6 +236,9 @@ def test_nonpersistent_redirection(safe_sys_redirections):
my_stdout = ""
my_stderr = ""
+ def stdin_callback():
+ pass
+
def stdout_callback(string):
nonlocal my_stdout
my_stdout += string
@@ -250,6 +253,7 @@ async def get_result(input):
return await res
shell = Console(
+ stdin_callback=stdin_callback,
stdout_callback=stdout_callback,
stderr_callback=stderr_callback,
persistent_stream_redirection=False,
@@ -274,6 +278,10 @@ async def test():
assert await get_result("1+1") == 2
+ assert await get_result("sys.stdin.isatty()")
+ assert await get_result("sys.stdout.isatty()")
+ assert await get_result("sys.stderr.isatty()")
+
asyncio.get_event_loop().run_until_complete(test())
| AttributeError: '_WriteStream' object has no attribute 'isatty'
While trying to run pytest on a package in REPL (for https://github.com/pyodide/pyodide/issues/1791),
```
>>> import pytest
>>> import micropip
>>> await micropip.install('dit-cli')
>>> import dit_cli
>>> pytest.main(['--pyargs', 'dit_cli'])
INTERNALERROR> Traceback (most recent call last):
INTERNALERROR> File "/lib/python3.9/site-packages/_pytest/main.py", line 265, in wrap_session
INTERNALERROR> config._do_configure()
INTERNALERROR> File "/lib/python3.9/site-packages/_pytest/config/__init__.py", line 982, in _do_configure
INTERNALERROR> self.hook.pytest_configure.call_historic(kwargs=dict(config=self))
INTERNALERROR> File "/lib/python3.9/site-packages/pluggy/hooks.py", line 308, in call_historic
INTERNALERROR> res = self._hookexec(self, self.get_hookimpls(), kwargs)
INTERNALERROR> File "/lib/python3.9/site-packages/pluggy/manager.py", line 93, in _hookexec
INTERNALERROR> return self._inner_hookexec(hook, methods, kwargs)
INTERNALERROR> File "/lib/python3.9/site-packages/pluggy/manager.py", line 84, in <lambda>
INTERNALERROR> self._inner_hookexec = lambda hook, methods, kwargs: hook.multicall(
INTERNALERROR> File "/lib/python3.9/site-packages/pluggy/callers.py", line 208, in _multicall
INTERNALERROR> return outcome.get_result()
INTERNALERROR> File "/lib/python3.9/site-packages/pluggy/callers.py", line 80, in get_result
INTERNALERROR> raise ex[1].with_traceback(ex[2])
INTERNALERROR> File "/lib/python3.9/site-packages/pluggy/callers.py", line 187, in _multicall
INTERNALERROR> res = hook_impl.function(*args)
INTERNALERROR> File "/lib/python3.9/site-packages/_pytest/terminal.py", line 228, in pytest_configure
INTERNALERROR> reporter = TerminalReporter(config, sys.stdout)
INTERNALERROR> File "/lib/python3.9/site-packages/_pytest/terminal.py", line 337, in __init__
INTERNALERROR> self.isatty = file.isatty()
INTERNALERROR> AttributeError: '_WriteStream' object has no attribute 'isatty'
```
so we would need to implement that method. As far as I understand from https://docs.python.org/3/library/os.html#os.isatty it should return True for _WriteStream.
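A minimal sketch of the fix, assuming the stream wrapper shape used in `_pyodide/console.py` (the patch above is the authoritative change):
```py
class _WriteStream:
    """A utility class so we can specify our own handler for writing to stdout."""

    def __init__(self, write_handler, name=None):
        self.write_handler = write_handler
        self.name = name

    def write(self, text):
        self.write_handler(text)

    def flush(self):
        pass

    def isatty(self) -> bool:
        # os.isatty semantics: report True so TTY-probing tools such as
        # pytest's TerminalReporter accept the redirected stream.
        return True
```
The same method goes on `_ReadStream` for `sys.stdin`.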
| 2021-09-08T14:41:35 |
|
pyodide/pyodide | 1,847 | pyodide__pyodide-1847 | [
"1841"
] | f03db870a25039bd70ac8822e0c3fe8be8ddb3fa | diff --git a/conftest.py b/conftest.py
--- a/conftest.py
+++ b/conftest.py
@@ -226,6 +226,13 @@ def run_js_inner(self, code, check_code):
def get_num_hiwire_keys(self):
return self.run_js("return pyodide._module.hiwire.num_keys();")
+ @property
+ def force_test_fail(self) -> bool:
+ return self.run_js("return !!pyodide._module.fail_test;")
+
+ def clear_force_test_fail(self):
+ self.run_js("pyodide._module.fail_test = false;")
+
def save_state(self):
self.run_js("self.__savedState = pyodide._module.saveState();")
@@ -415,14 +422,20 @@ def pytest_runtest_call(item):
trace_pyproxies
and pytest.mark.skip_refcount_check.mark not in item.own_markers
)
- yield from test_wrapper_check_for_memory_leaks(
+ yield from extra_checks_test_wrapper(
selenium, trace_hiwire_refs, trace_pyproxies
)
else:
yield
-def test_wrapper_check_for_memory_leaks(selenium, trace_hiwire_refs, trace_pyproxies):
+def extra_checks_test_wrapper(selenium, trace_hiwire_refs, trace_pyproxies):
+ """Extra conditions for test to pass:
+ 1. No explicit request for test to fail
+ 2. No leaked JsRefs
+ 3. No leaked PyProxys
+ """
+ selenium.clear_force_test_fail()
init_num_keys = selenium.get_num_hiwire_keys()
if trace_pyproxies:
selenium.enable_pyproxy_tracing()
@@ -439,6 +452,8 @@ def test_wrapper_check_for_memory_leaks(selenium, trace_hiwire_refs, trace_pypro
# get_result (we don't want to override the error message by raising a
# different error here.)
a.get_result()
+ if selenium.force_test_fail:
+ raise Exception("Test failure explicitly requested but no error was raised.")
if trace_pyproxies and trace_hiwire_refs:
delta_proxies = selenium.get_num_proxies() - init_num_proxies
delta_keys = selenium.get_num_hiwire_keys() - init_num_keys
| diff --git a/src/tests/test_pyodide.py b/src/tests/test_pyodide.py
--- a/src/tests/test_pyodide.py
+++ b/src/tests/test_pyodide.py
@@ -487,6 +487,24 @@ def __del__(self):
)
+def test_return_destroyed_value(selenium):
+ selenium.run_js(
+ """
+ self.f = function(x){ return x };
+ pyodide.runPython(`
+ from pyodide import create_proxy, JsException
+ from js import f
+ p = create_proxy([])
+ p.destroy()
+ try:
+ f(p)
+ except JsException as e:
+ assert str(e) == "Error: Object has already been destroyed"
+ `);
+ """
+ )
+
+
def test_docstrings_a():
from _pyodide.docstring import get_cmeth_docstring, dedent_docstring
from pyodide import JsProxy
| Low level crash due to destroyed proxy
The following code causes a low level failure:
```js
globalThis.f = function(x){ return x };
pyodide.runPython(`
from pyodide import create_proxy
from js import f
p = create_proxy([])
p.destroy()
f(p)
`);
```
This is a regression caused by #1573. Discovered while writing #1840.
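For reference, the behavior the fix should restore, as Python run inside pyodide (this mirrors the repro above; `f` is assumed to be the identity function defined on the JS side):
```py
from pyodide import create_proxy, JsException
from js import f  # assumes globalThis.f = x => x, as in the repro

p = create_proxy([])
p.destroy()
try:
    f(p)  # must raise a catchable JsException, not crash the runtime
except JsException as e:
    assert "destroyed" in str(e)
```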
| 2021-09-17T04:56:24 |
|
pyodide/pyodide | 1,895 | pyodide__pyodide-1895 | [
"1852"
] | 4c61ce980d71cf6a8968c3008a64dbffdacaccff | diff --git a/pyodide-build/pyodide_build/buildpkg.py b/pyodide-build/pyodide_build/buildpkg.py
--- a/pyodide-build/pyodide_build/buildpkg.py
+++ b/pyodide-build/pyodide_build/buildpkg.py
@@ -63,6 +63,23 @@ def close(self):
self._fd_write = None
+def _have_terser():
+ try:
+ # Check npm exists and terser is installed locally
+ subprocess.run(
+ [
+ "npm",
+ "list",
+ "terser",
+ ],
+ stdout=subprocess.DEVNULL,
+ )
+ except subprocess.CalledProcessError:
+ return False
+
+ return True
+
+
def check_checksum(path: Path, pkg: Dict[str, Any]):
"""
Checks that a tarball matches the checksum in the package metadata.
@@ -281,7 +298,9 @@ def unvendor_tests(install_prefix: Path, test_install_prefix: Path) -> int:
return n_moved
-def package_files(buildpath: Path, srcpath: Path, pkg: Dict[str, Any]) -> None:
+def package_files(
+ buildpath: Path, srcpath: Path, pkg: Dict[str, Any], compress: bool = False
+) -> None:
"""Package the installation folder into .data and .js files
Parameters
@@ -324,10 +343,19 @@ def package_files(buildpath: Path, srcpath: Path, pkg: Dict[str, Any]) -> None:
cwd=buildpath,
check=True,
)
- subprocess.run(
- ["uglifyjs", buildpath / (name + ".js"), "-o", buildpath / (name + ".js")],
- check=True,
- )
+
+ if compress:
+ subprocess.run(
+ [
+ "npx",
+ "--no-install",
+ "terser",
+ buildpath / (name + ".js"),
+ "-o",
+ buildpath / (name + ".js"),
+ ],
+ check=True,
+ )
# Package tests
if n_unvendored > 0:
@@ -342,15 +370,19 @@ def package_files(buildpath: Path, srcpath: Path, pkg: Dict[str, Any]) -> None:
cwd=buildpath,
check=True,
)
- subprocess.run(
- [
- "uglifyjs",
- buildpath / (name + "-tests.js"),
- "-o",
- buildpath / (name + "-tests.js"),
- ],
- check=True,
- )
+
+ if compress:
+ subprocess.run(
+ [
+ "npx",
+ "--no-install",
+ "terser",
+ buildpath / (name + "-tests.js"),
+ "-o",
+ buildpath / (name + "-tests.js"),
+ ],
+ check=True,
+ )
with open(buildpath / ".packaged", "wb") as fd:
fd.write(b"\n")
@@ -428,7 +460,7 @@ def build_package(path: Path, args):
# i.e. they need package running, but not compile
if not pkg.get("build", {}).get("sharedlibrary"):
compile(path, srcpath, pkg, args, bash_runner)
- package_files(buildpath, srcpath, pkg)
+ package_files(buildpath, srcpath, pkg, compress=args.compress_package)
finally:
bash_runner.close()
os.chdir(orig_path)
@@ -488,11 +520,23 @@ def make_parser(parser: argparse.ArgumentParser):
"needed if you want to build other packages that depend on this one."
),
)
+ parser.add_argument(
+ "--no-compress-package",
+ action="store_false",
+ default=True,
+ dest="compress_package",
+ help="Do not compress built packages.",
+ )
return parser
def main(args):
path = Path(args.package[0]).resolve()
+ if args.compress_package and not _have_terser():
+ raise RuntimeError(
+ "Terser is required to compress packages. Try `npm install -g terser` to install terser."
+ )
+
build_package(path, args)
| Replace uglifyjs with terser
We already install terser in the js package, so we might as well drop the uglifyjs dependency and use terser only.
https://github.com/terser/terser#why-choose-terser
Replacing `uglifyjs` with `terser` in [`buildpkg.py`](https://github.com/pyodide/pyodide/blob/main/pyodide-build/pyodide_build/buildpkg.py) + removing the dependency should be enough.
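A rough sketch of the substitution (assuming terser is installed locally through npm; `npx --no-install` fails fast instead of fetching it from the network):
```py
import subprocess


def minify_js(path: str) -> None:
    # terser takes the same input/output arguments the old uglifyjs call used
    subprocess.run(
        ["npx", "--no-install", "terser", path, "-o", path],
        check=True,
    )
```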
| 2021-10-21T02:31:08 |
||
pyodide/pyodide | 2,019 | pyodide__pyodide-2019 | [
"1577"
] | 418813de331e8222e17b35f124043289bd6b0936 | diff --git a/packages/fpcast-test/fpcast-test/setup.py b/packages/fpcast-test/fpcast-test/setup.py
new file mode 100644
--- /dev/null
+++ b/packages/fpcast-test/fpcast-test/setup.py
@@ -0,0 +1,18 @@
+from setuptools import setup, Extension
+
+setup(
+ name="fpcast-test",
+ version="0.1.1",
+ author="Hood Chatham",
+ author_email="[email protected]",
+ description="Test function pointer casts.",
+ long_description_content_type="text/markdown",
+ classifiers=[
+ "Programming Language :: Python :: 3",
+ "License :: OSI Approved :: MIT License",
+ "Operating System :: OS Independent",
+ ],
+ # packages=["fpcast_test"],
+ ext_modules=[Extension("fpcast_test", ["fpcast-test.c"])]
+ # python_requires='>=3.6',
+)
diff --git a/pyodide-build/pyodide_build/common.py b/pyodide-build/pyodide_build/common.py
--- a/pyodide-build/pyodide_build/common.py
+++ b/pyodide-build/pyodide_build/common.py
@@ -14,7 +14,7 @@ def _parse_package_subset(query: Optional[str]) -> Set[str]:
Supports folowing meta-packages,
- 'core': corresponds to packages needed to run the core test suite
- {"micropip", "pyparsing", "pytz", "packaging", "Jinja2"}. This is the default option
+ {"micropip", "pyparsing", "pytz", "packaging", "Jinja2", "fpcast-test"}. This is the default option
if query is None.
- 'min-scipy-stack': includes the "core" meta-package as well as some of the
core packages from the scientific python stack and their dependencies:
@@ -32,7 +32,15 @@ def _parse_package_subset(query: Optional[str]) -> Set[str]:
if query is None:
query = "core"
- core_packages = {"micropip", "pyparsing", "pytz", "packaging", "Jinja2", "regex"}
+ core_packages = {
+ "micropip",
+ "pyparsing",
+ "pytz",
+ "packaging",
+ "Jinja2",
+ "regex",
+ "fpcast-test",
+ }
core_scipy_packages = {
"numpy",
"scipy",
| diff --git a/docs/development/testing.md b/docs/development/testing.md
--- a/docs/development/testing.md
+++ b/docs/development/testing.md
@@ -31,12 +31,6 @@ There are 3 test locations that are collected by pytest,
(do not require selenium to run)
- `packages/*/test_*`: package specific tests.
-Additionally you can run emsdk specific tests with,
-
-```
-make -C emsdk test
-```
-
### Running the JavaScript test suite
To run tests on the JavaScript Pyodide package using Mocha, run the following commands,
diff --git a/emsdk/tests/__init__.py b/emsdk/tests/__init__.py
deleted file mode 100644
diff --git a/emsdk/tests/common.py b/emsdk/tests/common.py
deleted file mode 100644
--- a/emsdk/tests/common.py
+++ /dev/null
@@ -1,33 +0,0 @@
-MAIN_C = r"""
-#include <stdio.h>
-#include <dlfcn.h>
-#include <setjmp.h>
-
-void never_called()
-{
- jmp_buf buf;
- int i=setjmp(buf);
- longjmp(buf,1);
-}
-
-
-int main() {
- puts("hello from main");
- void *handle = dlopen("library.wasm", RTLD_NOW);
- if (!handle) {
- puts("cannot load side module");
- puts(dlerror());
- return 1;
- }
- typedef void (*type_v)();
- type_v side_func = (type_v) dlsym(handle, "foo");
- if (!side_func) {
- puts("cannot load side function");
- puts(dlerror());
- return 1;
- } else {
- side_func();
- }
- return 0;
-}
-"""
diff --git a/emsdk/tests/test_dyncall.py b/emsdk/tests/test_dyncall.py
deleted file mode 100644
--- a/emsdk/tests/test_dyncall.py
+++ /dev/null
@@ -1,74 +0,0 @@
-import subprocess
-from . import common
-
-
-def test_dyncall(tmpdir):
- with tmpdir.as_cwd():
- with open("library.c", "w") as f:
- f.write(
- """\
-#include <stdio.h>
-#include <stdlib.h>
-#include <setjmp.h>
-#include <assert.h>
-
-// This can be any function that has a signature not found in main.
-__attribute__ ((noinline)) int indirect_function(int a, float b, int c, double d) {
- return a;
-}
-
-typedef int (*type_iifid) (int, float, int, double);
-
-void foo() {
- // Hack to force inclusion of malloc
- volatile int x = (int) malloc(1);
- free((void *) x);
-
- type_iifid fp = &indirect_function;
-
- jmp_buf buf;
- int i = setjmp(buf);
-
- printf("%d\\n", i);
- assert(fp(i, 0, 0, 0) == i);
-
- if (i == 0) longjmp(buf, 1);
-
-}
-"""
- )
- with open("main.c", "w") as f:
- f.write(common.MAIN_C)
-
- subprocess.run(
- [
- "emcc",
- "-g4",
- "-s",
- "SIDE_MODULE=1",
- "library.c",
- "-o",
- "library.wasm",
- "-s",
- "EMULATE_FUNCTION_POINTER_CASTS=1",
- "-s",
- "EXPORT_ALL=1",
- ],
- check=True,
- )
- subprocess.run(
- [
- "emcc",
- "-g4",
- "-s",
- "MAIN_MODULE=1",
- "main.c",
- "--embed-file",
- "library.wasm",
- "-s",
- "EMULATE_FUNCTION_POINTER_CASTS=1",
- ],
- check=True,
- )
- out = subprocess.run(["node", "a.out.js"], capture_output=True, check=True)
- assert out.stdout == b"hello from main\n0\n1\n"
diff --git a/emsdk/tests/test_emulate.py b/emsdk/tests/test_emulate.py
deleted file mode 100644
--- a/emsdk/tests/test_emulate.py
+++ /dev/null
@@ -1,55 +0,0 @@
-import subprocess
-from . import common
-
-
-def test_emulate_function(tmpdir):
- with tmpdir.as_cwd():
- with open("library.c", "w") as f:
- f.write(
- """\
-#include <stdio.h>
-
-// emulate function pointer casts - this has the
-// wrong arguments and return type
-int foo(int extra_args) {
- puts("hello from library");
- return 0;
-}"""
- )
- with open("main.c", "w") as f:
- f.write(common.MAIN_C)
-
- subprocess.run(
- [
- "emcc",
- "-g",
- "-s",
- "SIDE_MODULE=1",
- "library.c",
- "-o",
- "library.wasm",
- "-s",
- "EMULATE_FUNCTION_POINTER_CASTS=1",
- "-s",
- "EXPORT_ALL=1",
- ],
- check=True,
- )
- subprocess.run(
- [
- "emcc",
- "-g",
- "-s",
- "MAIN_MODULE=1",
- "main.c",
- "--embed-file",
- "library.wasm",
- "-s",
- "EMULATE_FUNCTION_POINTER_CASTS=1",
- "-s",
- "EXPORT_ALL=1",
- ],
- check=True,
- )
- out = subprocess.run(["node", "a.out.js"], capture_output=True, check=True)
- assert out.stdout == b"hello from main\nhello from library\n"
diff --git a/packages/cffi_example/test_cffi_example.py b/packages/cffi_example/test_cffi_example.py
--- a/packages/cffi_example/test_cffi_example.py
+++ b/packages/cffi_example/test_cffi_example.py
@@ -2,6 +2,13 @@
from pyodide_build.testing import run_in_pyodide
+CHROME_FAIL_v90_MSG = (
+ "Doesn't work in chrome v89 or v90, I think because of "
+ "https://bugs.chromium.org/p/chromium/issues/detail?id=1200031. "
+ "Confirmed locally to work in v91 and v96, and to break on v90."
+)
+
+
@pytest.mark.parametrize(
"pattern,name,flags,expected",
[
@@ -13,6 +20,8 @@
)
def test_fnmatch(selenium_module_scope, pattern, name, flags, expected):
selenium = selenium_module_scope
+ if selenium.browser == "chrome":
+ pytest.xfail(CHROME_FAIL_v90_MSG)
selenium.load_package("cffi_example")
result = selenium.run(
f"""
@@ -23,7 +32,11 @@ def test_fnmatch(selenium_module_scope, pattern, name, flags, expected):
assert result == expected
-@run_in_pyodide(packages=["cffi_example"], module_scope=True)
+@run_in_pyodide(
+ packages=["cffi_example"],
+ module_scope=True,
+ xfail_browsers={"chrome": CHROME_FAIL_v90_MSG},
+)
def test_person():
from cffi_example.person import Person
diff --git a/packages/fpcast-test/test_fpcast_test.py b/packages/fpcast-test/test_fpcast_test.py
new file mode 100644
--- /dev/null
+++ b/packages/fpcast-test/test_fpcast_test.py
@@ -0,0 +1,46 @@
+from pyodide_build.testing import run_in_pyodide
+
+
+@run_in_pyodide(packages=["fpcast-test"])
+def test_fpcasts():
+ import fpcast_test
+
+ fpcast_test.noargs0()
+ fpcast_test.noargs1()
+ fpcast_test.noargs2()
+ fpcast_test.noargs3()
+
+ fpcast_test.varargs0()
+ fpcast_test.varargs1()
+ fpcast_test.varargs2()
+ fpcast_test.varargs3()
+
+ fpcast_test.kwargs0()
+ fpcast_test.kwargs1()
+ fpcast_test.kwargs2()
+ fpcast_test.kwargs3()
+
+ fpcast_test.Callable0()()
+ fpcast_test.Callable1()()
+ fpcast_test.Callable2()()
+ fpcast_test.Callable3()()
+
+ t = fpcast_test.TestType()
+ t.noargs0()
+ t.noargs1()
+ t.noargs2()
+ t.noargs3()
+
+ t.varargs0()
+ t.varargs1()
+ t.varargs2()
+ t.varargs3()
+
+ t.kwargs0()
+ t.kwargs1()
+ t.kwargs2()
+ t.kwargs3()
+
+ t.getset0
+ t.getset1
+ t.getset1 = 5
diff --git a/pyodide-build/pyodide_build/tests/test_common.py b/pyodide-build/pyodide_build/tests/test_common.py
--- a/pyodide-build/pyodide_build/tests/test_common.py
+++ b/pyodide-build/pyodide_build/tests/test_common.py
@@ -42,6 +42,7 @@ def test_parse_package_subset():
"Jinja2",
"micropip",
"regex",
+ "fpcast-test",
}
# by default core packages are built
assert _parse_package_subset(None) == _parse_package_subset("core")
@@ -53,6 +54,7 @@ def test_parse_package_subset():
"Jinja2",
"micropip",
"regex",
+ "fpcast-test",
"numpy",
"scipy",
"pandas",
| fpcast emulation tracing and removing
We currently build with function pointer cast emulation. It is really hard to remove this because it is hard to know where bad function calls are being made (i.e. where an indirect call is made where the call arguments don't match the target function arguments).
I would suggest that what is needed is some kind of fpcast_emulation with tracing build, then run the full test suite on that, hopefully it would hit the majority of the function pointer casting shenanigans. In theory this was fixed in python in 3.8, so we should be good with core python, but we do bundle some other stuff for images and fonts etc. that might cause issues. The patches to scipy/numpy removed most issues there too, but there may be some that I missed.
I suspect the easiest way to do this is to make a build which, in emscripten, replaces indirect calls with a call to a python indirect call trampoline, so that rather than calling the fpcast versions, it calls something in imports which is javascript and does the actual call. I think that side of things would need a change to visitCallIndirect in the binaryen pass to just make it call the javascript trampoline with the signature of the indirect call as a parameter. Then the trampoline could check that against the signature of the target, and chuck out a log message if they didn't match. This would be way more efficient if webassembly type reflection was enabled; otherwise some manual type reflection cunningness would be needed to get the type of the target. Type reflection is behind a flag in chromium (enable experimental webassembly features).
I imagine it would be slow as a slow thing, but for testing it would do the job.
I would suggest it isn't much work to update binaryen to call the trampoline, then create a build that works in chromium with webassembly reflection enabled. It might be that getting pyodide itself loading in chromium and running the test suite is enough to exercise the code and find the cause of the naughty pointer calls and sort it.
I won't have time to do anything till teaching and marking ends (i.e. june/july at earliest), but if anyone wants to do it, I'm happy to input.
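To make the idea concrete, here is a toy Python model of the proposed tracing trampoline (a host-side sketch only: the real thing would live in the binaryen pass plus a javascript import, and `FUNCTION_TABLE` merely stands in for the WASM indirect function table):
```py
import logging

# index -> (signature string, callable); stand-in for the WASM table
FUNCTION_TABLE = {
    0: ("iifid", lambda a, b, c, d: a),
}


def call_indirect(index, caller_sig, *args):
    target_sig, func = FUNCTION_TABLE[index]
    if caller_sig != target_sig:
        # A real WASM indirect call would trap here; log it for tracing.
        logging.warning(
            "fpcast mismatch at table index %d: caller %s vs target %s",
            index, caller_sig, target_sig,
        )
    return func(*args)
```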
| Thanks for opening this issue with a detailed plan @joemarshall !
Personally I also won't have much availability to look into this in the near future, but I very much look forward to getting rid of fpcast emulation. It should also help for Python performance #1120 as far as I understand.
I had a tiny fiddle with this - interestingly pyodide itself now loads, I can run code in the JS console before the python console.html loads, but I can't run anything in the console without hitting a failure somewhere in jsproxy.
Latest chrome canary plus webassembly debugging however means that
a) you get the exception about null pointer or wrong pointer type
b) you actually can get the code lines and stuff that is failing in the C code now.
Something is broken in python waiting for js promises. I'll have a peek at how that works.
Good news is that it is now debuggable without having to do the logging thing...
Aha, python core seems to mostly work nowadays, but in jsproxy / asyncio it breaks, calling asyncio.ensure_future on a javascript promise doesn't work and causes a crash that I will describe below:
Firstly, a fail case - if I stick this in javascript console, I can reproduce it without having to step through shed loads of console stuff. Just load console.html as normal, then run this in the js console before you type anything in the web page. I think something in jsproxy has the wrong function signature.
```
p=Promise.resolve('Great')
pyodide.runPython('import js,asyncio\nprint(js.p)\nasyncio.ensure_future(js.p)\n')
```
```
Uncaught RuntimeError: null function or function signature mismatch
at wrap_unaryfunc (typeobject.c:5661)
at wrapperdescr_raw_call (descrobject.c:483)
at wrapper_call (descrobject.c:1303)
at _PyObject_MakeTpCall (call.c:159)
at _PyObject_Vectorcall (abstract.h:125)
at call_function (ceval.c:4999)
at _PyEval_EvalFrameDefault (ceval.c:3475)
at PyEval_EvalFrameEx (ceval.c:741)
at gen_send_ex (genobject.c:222)
at _PyGen_Send (genobject.c:292)
```
Running a python trace (see below), it is hitting the failure within the python code here:
https://github.com/python/cpython/blob/v3.8.2/Lib/asyncio/tasks.py#L684
```
pyodide.runPython("import sys\ndef tf(a,b,c):\n print(a,b,c)\n return tf\nsys.settrace(tf)\n")
```
```
<frame at 0xc75100, file '/lib/python3.8/asyncio/events.py', line 79, code _run> call None
pyodide.asm.js:3708 <frame at 0xc75100, file '/lib/python3.8/asyncio/events.py', line 80, code _run> line None
pyodide.asm.js:3708 <frame at 0xc75100, file '/lib/python3.8/asyncio/events.py', line 81, code _run> line None
pyodide.asm.js:3708 <frame at 0xc9f3e8, file '/lib/python3.8/asyncio/tasks.py', line 263, code __step> call None
pyodide.asm.js:3708 <frame at 0xc9f3e8, file '/lib/python3.8/asyncio/tasks.py', line 264, code __step> line None
pyodide.asm.js:3708 <frame at 0xca00f0, file '/lib/python3.8/asyncio/futures.py', line 157, code done> call None
pyodide.asm.js:3708 <frame at 0xca00f0, file '/lib/python3.8/asyncio/futures.py', line 163, code done> line None
pyodide.asm.js:3708 <frame at 0xca00f0, file '/lib/python3.8/asyncio/futures.py', line 163, code done> return False
pyodide.asm.js:3708 <frame at 0xc9f3e8, file '/lib/python3.8/asyncio/tasks.py', line 267, code __step> line None
pyodide.asm.js:3708 <frame at 0xc9f3e8, file '/lib/python3.8/asyncio/tasks.py', line 271, code __step> line None
pyodide.asm.js:3708 <frame at 0xc9f3e8, file '/lib/python3.8/asyncio/tasks.py', line 272, code __step> line None
pyodide.asm.js:3708 <frame at 0xc9f3e8, file '/lib/python3.8/asyncio/tasks.py', line 274, code __step> line None
pyodide.asm.js:3708 <frame at 0xc75280, file '/lib/python3.8/asyncio/tasks.py', line 929, code _enter_task> call None
pyodide.asm.js:3708 <frame at 0xc75280, file '/lib/python3.8/asyncio/tasks.py', line 930, code _enter_task> line None
pyodide.asm.js:3708 <frame at 0xc75280, file '/lib/python3.8/asyncio/tasks.py', line 931, code _enter_task> line None
pyodide.asm.js:3708 <frame at 0xc75280, file '/lib/python3.8/asyncio/tasks.py', line 934, code _enter_task> line None
pyodide.asm.js:3708 <frame at 0xc75280, file '/lib/python3.8/asyncio/tasks.py', line 934, code _enter_task> return None
pyodide.asm.js:3708 <frame at 0xc9f3e8, file '/lib/python3.8/asyncio/tasks.py', line 276, code __step> line None
pyodide.asm.js:3708 <frame at 0xc9f3e8, file '/lib/python3.8/asyncio/tasks.py', line 277, code __step> line None
pyodide.asm.js:3708 <frame at 0xc9f3e8, file '/lib/python3.8/asyncio/tasks.py', line 280, code __step> line None
pyodide.asm.js:3708 <frame at 0x89f358, file '/lib/python3.8/asyncio/tasks.py', line 677, code _wrap_awaitable> call None
pyodide.asm.js:3708 <frame at 0x89f358, file '/lib/python3.8/asyncio/tasks.py', line 684, code _wrap_awaitable> line None
```
Aha,
```
slots[cur_slot++] =
(PyType_Slot){ .slot = Py_am_await, .pfunc = (void*)JsProxy_Await };
```
but JsProxy_Await needs to be a unaryfunc to go in the Py_am_await type slot
https://docs.python.org/3/c-api/typeobj.html#c.unaryfunc
It doesn't use args, so maybe just a typo.
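For reference, a Python-level analog (hypothetical class, for illustration): `Py_am_await` corresponds to `__await__`, which receives only `self`, which is why the slot expects a unaryfunc:
```py
import asyncio


class AwaitableProxy:
    def __await__(self):  # exactly one argument: self
        if False:
            yield  # marks this method as a generator; __await__ must return an iterator
        return "Great"


async def demo():
    return await AwaitableProxy()


assert asyncio.run(demo()) == "Great"
```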
oh yeah, core python works (or at least the console appears to run okay) if I fix that, even with fpcast turned off! That's cool.
That's great news! cc @hoodmane regarding the `JsProxy_Await` function signature.
Does it mean that we could build CPython at least without `EMULATE_FUNCTION_POINTER_CASTS`, or do we need to build everything either with or without?
We need to build everything with or without. Bad things will happen otherwise.
There's also a patch needed in cpython for dictionaries, which makes the tests fail right now. You can't reverse the items or values in a dictionary: dictobject casts a function taking one parameter to a PyCFunction, which should take two args and ignore the last. It is fixed for reversing the dict keys, but not for the items. Needs fixing in upstream python too.
> Needs fixing in upstream python too.
Hmm, this might be a good occasion to migrate to Python 3.9 as well https://github.com/pyodide/pyodide/issues/978.
> It doesn't use args, so maybe just a typo.
I got confused because `METH_NOARGS` functions are supposed to take an argument that they ignore.
Yeah, that's the same issue in cpython dict. I've pushed that fix to python core (https://bugs.python.org/issue44114), and it is going to be backported to 3.9 but not 3.8, so will still need a patch in pyodide until it works with 3.9. Tests for core python pass with this patch.
Still crashes on import of quite a few packages though. There are a bunch of similar errors in some of the C packages. They're quite easy to find, but it is going to be a bit whack-a-mole to find them all. Numpy for example has some multiarray getters which ignore extra void* parameters.
Incidentally, I think there would be value in trying to get cpython maintainers to add CI for webassembly into core cpython testing, so that core python at least doesn't rely on undefined c behaviour. I don't know how one does that though, I only know how to push little patches to python.
> They're quite easy to find, but it is going to be a bit whack-a-mole to find them all.
> [..]
> Incidentally, I think there would be value in trying to get cpython maintainers to add CI for webassembly into core cpython testing, so that core python at least doesn't rely on undefined c behaviour
GCC has an option to warn on invalid function casts `-Wcast-function-type`, wouldn't that be sufficient for this issue? That's how they detected the issue initially in CPython https://bugs.python.org/issue33012
Asking to compile with `-Werror=cast-function-type` would be much easier than setting up a WASM build, particularly for all the other packages. For instance, building numpy with `CFLAGS="-Werror=cast-function-type" python setup.py install` makes it possible to detect some of those errors, I think.
> We need to build everything with or without. Bad things will happen otherwise.
Could you elaborate on why? We can fix CPython and the core packages, but my concern is how to make sure things won't break if users compile some third party package themselves. So if anything could be done to improve this situation in emscripten it would likely be easier than trying to fix the whole Python ecosystem.
> to add CI for webassembly into core cpython
https://bugs.python.org/issue40280 would probably be a first step
I thought `-Werror=cast-function-type` would do the job. Two things don't work with it though -
1) cpython relies on incorrect function casts being cool anywhere it passes a python function, e.g. PyMethodDef.ml_meth is defined as a PyCFunction, but can actually be: _PyCFunctionFast, PyCMethod, PyCFunctionWithKeywords etc. So you basically can't make a python module without doing at least some naughty casting behaviour right now.
2) I don't think it works in emsdk, at least I can build things with it turned on and bad things still happen. I think -werror=blah is a bit flaky in emcc, but I can't work out why well enough to put a bug report anywhere (llvm? emscripten?)
From https://bugs.python.org/issue33012 this needs fixing before anything else. It would need care to make sure that all c python modules didn't need updating though I think, so I don't see it happening soon...
> This is not fixing the underlying issue but hiding it. The right fix would be to use a union for ml_meth providing members for the 3 different function. So the developer could assign them correctly and the compiler would warn if he would do something wrong. Casting to (void *) is just hiding the problem not fixing it!
Oh, and the reason fpcast is an all or nothing deal is because it changes the core linking of emscripten modules. If fpcast is on, any time you make a function pointer, it will point to the fpcast version, and any calls to function pointers will expect the signature to be the fixed parameter fpcast version. So you won't be able to pass function pointers between non-fpcast and fpcast modules, which means python modules have to be built with the same mode as the core python.
Hey,
This branch here builds with fpcast turned off, and in chrome and firefox it even loads numpy alright and you can do things with a numpy array. All tests for numpy pass in firefox also.
Module loading is faster.
I had to up the memory to make it work, now it is 41mb. I'm not 100% sure if that is because of all the debug settings I have on.
https://github.com/joemarshall/pyodide/tree/fpcast
Things that don't work:
1) If I turn wasm_bigint on, bad things happen. Not sure why. Could be a memory thing.
2) Chrome tests fail with memory errors. Not sure if it just needs even more memory (I already doubled it), this might be because I have all sorts of debugging things turned on, I will try a non-debug build to see.
Loading numpy in a full release build with no fpcast, total time for loadPackage plus "import numpy" is 1.8 seconds.
Loading it in the old build on the same machine, approx 10 seconds.
That's great @joemarshall !
> Could be a memory thing.
Yes, I also encountered a few more errors in the test suite with the debug build in the past, so it might not be representative.
For the numpy patch,
> This isn't used in numpy but needs to be there even if it is ignored.
So that's not something that could be contributed upstream? What happens if numpy is built with gcc using that patch? I can check tomorrow.
> It would need care to make sure that all c python modules didn't need updating though I think, so I don't see it happening soon...
Breaking C Python modules would indeed likely be a no-go. But in the end it's not an issue, since you managed to make it work?
> Loading numpy in a full release build with no fpcast, total time for loadPackage plus "import numpy" is 1.8 seconds.
>
> Loading it in the old build on the same machine, approx 10 seconds.
Wow, that's huge progress! I'll try to experiment a bit more with your branch later this week.
The numpy patch could go upstream, I think; what I meant there is that the argument needs to be on the function even if nothing ever uses it.
I still see weird memory errors in chrome tests in the release build, even on things that look to work fine in the browser. It is strange, because I would expect fpcast errors to cause complete crashes, not memory errors, so I suspect the compile happens differently enough without fpcast that things need more memory somehow (and maybe there is a selenium setting relating to memory that I don't know about?).
I've chucked a pull request into numpy now for that getter patch.
I rebuilt without debug and with -O3 rather than -Os and it happily runs numpy tests.
Things that fail:
matplotlib
optlib (chrome only)
scipy
sklearn
pytest
pywavelets (chrome only)
zarr (chrome only)
The big problem is scipy and fortran. F2C generates a lot of code that looks like: `int dpvd_(S_fp fcn, integer *n, integer *m, integer *np,)`
using types from f2c.h like S_fp, which are defined as `typedef /* Subroutine */ int (*S_fp)();` i.e. without arguments, then called with whatever arguments are required.
There is an option behind an ifdef __cplusplus to define them as varargs functions, like so:
```
#ifdef __cplusplus
typedef int /* Unknown procedure type */ (*U_fp)(...);
```
I don't know what if anything is ever calling these - i.e. whether they're exposed from python at all, and how it handles function pointers.
There is one thing in ARPACK which causes python to fail: the debug and timing structures are initialized using one of these pointers. It fails in `f2pyinitdebug_` in `packages/scipy/build/scipy-0.17.1/build/src.linux-x86_64-3.8/build/src.linux-x86_64-3.8/scipy/sparse/linalg/eigen/arpack/_arpack-f2pywrappers.c`, because it passes in something with arguments cast to an `int (*)()`, then calls that with a load of arguments. I think changing f2py could fix that one. It may be that these function pointer methods aren't actually called by anything in python, but I don't quite know.
Aha, I fixed those; it was just the return value. Because it's C, it happily makes up the parameters to the call okay. 50% of scipy tests work now. Something is still not working in the scipy linalg tests, but it's probably fixable.
Aww, that's great news! Thanks for your work on this @joemarshall !
For scipy, it might also be worth considering updating the version before doing much more work on fixing function signatures there. I started to work on it in https://github.com/pyodide/pyodide/pull/1293 but it's a lot of patches to migrate. It's a bit orthogonal, but adding more scipy patches is also not going to make that work easier.
It turned out to be an incorrect signature in numpy actually. Now that is fixed, most of the tests pass.
matplotlib still fails with incorrect function signatures, as do test_pytest and test_skimage, but the rest of the packages passed their tests.
I have updated it on my server, but my students mostly do quite simple things, so aren't going to stress test things.
I think that it *might* be worth pushing this across to main pyodide, but on the other hand there is a risk that it makes odd errors happen in packages that aren't caught by the tests, especially in big packages that don't have high test coverage.
Great! We are planning a 0.18 release at the end of July, and I think it might be too tight to merge and test this extensively for that release. I would be +1 to merge it just after the release and include it in 0.19. We can do alpha releases between 0.18 and 0.19 with it if necessary as well.
Could you open a PR from your branch? It might also need some adjustments since in other changes @hoodmane migrated to Python 3.9 and added ctypes support.
> there is a risk that it makes odd errors happen in packages that aren't caught by the tests, especially in big packages that don't have high test coverage.
If it isn't tested we should assume that it's currently broken and that it will be broken in different ways after whatever changes you make.
Yes, in general we should start running package test suites, at least in a CI cron job or on demand: https://github.com/pyodide/pyodide/issues/69
I've done a pull request now. I got the 3.9 stuff merged in, but I haven't pulled in the latest commits so it doesn't merge cleanly right now.
pyodide/pyodide | 2,022 | pyodide__pyodide-2022 | [
"2011"
] | 17f675a89e28aa17d0b9c9697cf0a49402c4de53 | diff --git a/src/py/pyodide/webloop.py b/src/py/pyodide/webloop.py
--- a/src/py/pyodide/webloop.py
+++ b/src/py/pyodide/webloop.py
@@ -143,7 +143,13 @@ def call_later(
if delay < 0:
raise ValueError("Can't schedule in the past")
h = asyncio.Handle(callback, args, self, context=context)
- setTimeout(create_once_callable(h._run), delay * 1000)
+
+ def run_handle():
+ if h.cancelled():
+ return
+ h._run()
+
+ setTimeout(create_once_callable(run_handle), delay * 1000)
return h
def call_at(
| diff --git a/src/tests/test_webloop.py b/src/tests/test_webloop.py
--- a/src/tests/test_webloop.py
+++ b/src/tests/test_webloop.py
@@ -33,6 +33,26 @@ async def sleep_task():
)
+def test_cancel_handle(selenium_standalone):
+ selenium_standalone.run_js(
+ """
+ await pyodide.runPythonAsync(`
+ import asyncio
+ loop = asyncio.get_event_loop()
+ exc = []
+ def exception_handler(loop, context):
+ exc.append(context)
+ loop.set_exception_handler(exception_handler)
+ try:
+ await asyncio.wait_for(asyncio.sleep(1), 2)
+ finally:
+ loop.set_exception_handler(None)
+ assert not exc
+ `);
+ """
+ )
+
+
def test_return_result(selenium):
# test return result
run_with_resolve(
@@ -172,7 +192,9 @@ def f():
)
-def test_webloop_exception_handler(selenium):
[email protected](reason="Works locally but failing in test suite as of #2022.")
+def test_webloop_exception_handler(selenium_standalone):
+ selenium = selenium_standalone
selenium.run_async(
"""
import asyncio
| exception in asyncio.wait_for()
## π Bug
Waiting for an awaitable using `asyncio.wait_for()` prints exceptions to the browser's console:
```
pyodide.asm.js:14 Exception in callback None()
pyodide.asm.js:14 handle: <Handle cancelled>
```
This happens whether the awaitable finishes before the timeout or not. `asyncio.wait_for()` appears to still function as intended, but this makes applications that use `asyncio.wait_for()` frequently very spammy.
### To Reproduce
In [Pyodide console application](https://pyodide.org/en/stable/console.html), enter the following:
```
import asyncio
await asyncio.wait_for(asyncio.sleep(1), 2)
```
### Expected behavior
No exceptions are raised and no output is printed to the console.
### Environment
- Pyodide Version: 0.18.1 from https://cdn.jsdelivr.net/pyodide/v0.18.1/full/
- Browser version: Chrome 96.0.4664.55
- Any other relevant information:
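The patch above addresses this by guarding the handle before running it; a sketch of the patched `WebLoop.call_later` (pyodide-only imports, shown for context):
```py
import asyncio
from js import setTimeout                 # pyodide-only import
from pyodide import create_once_callable  # pyodide-only import


def call_later(self, delay, callback, *args, context=None):
    if delay < 0:
        raise ValueError("Can't schedule in the past")
    h = asyncio.Handle(callback, args, self, context=context)

    def run_handle():
        # The JS timer still fires after wait_for() cancels the handle;
        # skip cancelled handles instead of letting h._run() log
        # "Exception in callback None()".
        if h.cancelled():
            return
        h._run()

    setTimeout(create_once_callable(run_handle), delay * 1000)
    return h
```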
| Thanks for the report! This is probably a bug in our event loop. | 2021-12-05T23:36:32 |
pyodide/pyodide | 2,065 | pyodide__pyodide-2065 | [
"549"
] | f44f7b3dfef1075ce6cbe470fa7aff2ee9138438 | diff --git a/pyodide-build/pyodide_build/_f2c_fixes.py b/pyodide-build/pyodide_build/_f2c_fixes.py
--- a/pyodide-build/pyodide_build/_f2c_fixes.py
+++ b/pyodide-build/pyodide_build/_f2c_fixes.py
@@ -1,173 +1,259 @@
import re
+import subprocess
+from textwrap import dedent # for doctests
+from typing import List, Iterable, Iterator, Tuple
+from pathlib import Path
-def fix_f2c_clapack_calls(f2c_output_name: str):
- """Fix F2C CLAPACK calls
+def fix_f2c_output(f2c_output_path: str):
+ """
+ This function is called on the name of each C output file. It fixes up the C
+ output in various ways to compensate for the lack of f2c support for Fortan
+ 90 and Fortran 95.
+ """
+ f2c_output = Path(f2c_output_path)
+ if f2c_output.name == "lapack_extras.c":
+ # dfft.c has a bunch of implicit cast args coming from functions copied
+ # out of future lapack versions. fix_inconsistent_decls will fix all
+ # except string to int.
+ subprocess.check_call(
+ [
+ "patch",
+ str(f2c_output_path),
+ f"../../patches/fix-implicit-cast-args-from-newer-lapack.patch",
+ ]
+ )
+
+ with open(f2c_output, "r") as f:
+ lines = f.readlines()
+ if "id_dist" in f2c_output_path:
+ # Fix implicit casts in id_dist.
+ lines = fix_inconsistent_decls(lines)
+ if "odepack" in f2c_output_path or f2c_output.name == "mvndst.c":
+ # Mark all but one declaration of each struct as extern.
+ if f2c_output.name == "blkdta000.c":
+ # extern marking in blkdata000.c doesn't work properly so we let it
+ # define the one copy of the structs. It doesn't talk about lsa001
+ # at all though, so we need to add a definition of it.
+ lines.append(
+ """
+ struct { doublereal rownd2, pdest, pdlast, ratio, cm1[12], cm2[5], pdnorm;
+ integer iownd2[3], icount, irflag, jtyp, mused, mxordn, mxords;
+ } lsa001_;
+ """
+ )
+ else:
+ add_externs_to_structs(lines)
+
+ if f2c_output.name in [
+ "wrap_dummy_g77_abi.c",
+ "_lapack_subroutine_wrappers.c",
+ "_blas_subroutine_wrappers.c",
+ "_flapack-f2pywrappers.c",
+ ]:
+ lines = remove_ftnlen_args(lines)
+
+ with open(f2c_output, "w") as f:
+ f.writelines(lines)
+
+
+def prepare_doctest(x):
+ return dedent(x).strip().split("\n")
+
+
+def remove_ftnlen_args(lines: List[str]) -> List[str]:
+ """
+ Functions with "character" arguments have these extra ftnlen arguments at
+ the end (which are never used). Other places declare these arguments as
+ "integer" which don't get length arguments. This automates the removal of
+ the problematic arguments.
+
+ >>> print("\\n".join(remove_ftnlen_args(prepare_doctest('''
+ ... /* Subroutine */ int chla_transtypewrp__(char *ret, integer *trans, ftnlen
+ ... ret_len)
+ ... '''))))
+ /* Subroutine */ int chla_transtypewrp__(char *ret, integer *trans)
+
+ >>> print("\\n".join(remove_ftnlen_args(prepare_doctest('''
+ ... /* Subroutine */ int clanhfwrp_(real *ret, char *norm, char *transr, char *
+ ... uplo, integer *n, complex *a, real *work, ftnlen norm_len, ftnlen
+ ... transr_len, ftnlen uplo_len)
+ ... '''))))
+ /* Subroutine */ int clanhfwrp_(real *ret, char *norm, char *transr, char * uplo, integer *n, complex *a, real *work)
+ """
+ new_lines = []
+ for line in regroup_lines(lines):
+ if line.startswith("/* Subroutine */"):
+ line = re.sub(r",\s*ftnlen [a-z]*_len", "", line)
+ new_lines.append(line)
+ return new_lines
+
+
+def add_externs_to_structs(lines: List[str]):
+ """
+ The fortran "common" keyword is supposed to share variables between a bunch
+ of files. f2c doesn't handle this correctly (it isn't possible for it to
+ handle it correctly because it only looks one file at a time).
+
+ We mark all the structs as externs and then (separately) add one non extern
+ version to each file.
+ >>> lines = prepare_doctest('''
+ ... struct { doublereal rls[218];
+ ... integer ils[39];
+ ... } ls0001_;
+ ... struct { doublereal rlsa[22];
+ ... integer ilsa[9];
+ ... } lsa001_;
+ ... struct { integer ieh[2];
+ ... } eh0001_;
+ ... ''')
+ >>> add_externs_to_structs(lines)
+ >>> print("\\n".join(lines))
+ extern struct { doublereal rls[218];
+ integer ils[39];
+ } ls0001_;
+ extern struct { doublereal rlsa[22];
+ integer ilsa[9];
+ } lsa001_;
+ extern struct { integer ieh[2];
+ } eh0001_;
+ """
+ for idx, line in enumerate(lines):
+ if line.startswith("struct"):
+ lines[idx] = "extern " + lines[idx]
- f2c compiles code with fortran linkage, which means that
- strings are passed as char* plus extra length argument at
- the end.
- CLAPACK uses C null terminated strings.
+def regroup_lines(lines: Iterable[str]) -> Iterator[str]:
+ """
+ Make sure that functions and declarations have their argument list only on
+ one line.
+
+ >>> print("\\n".join(regroup_lines(prepare_doctest('''
+ ... /* Subroutine */ int clanhfwrp_(real *ret, char *norm, char *transr, char *
+ ... uplo, integer *n, complex *a, real *work, ftnlen norm_len, ftnlen
+ ... transr_len, ftnlen uplo_len)
+ ... {
+ ... static doublereal psum[52];
+ ... extern /* Subroutine */ int dqelg_(integer *, doublereal *, doublereal *,
+ ... doublereal *, doublereal *, integer *);
+ ... '''))))
+ /* Subroutine */ int clanhfwrp_(real *ret, char *norm, char *transr, char * uplo, integer *n, complex *a, real *work, ftnlen norm_len, ftnlen transr_len, ftnlen uplo_len)
+ {
+ static doublereal psum[52];
+ extern /* Subroutine */ int dqelg_(integer *, doublereal *, doublereal *, doublereal *, doublereal *, integer *);
+
+ """
+ line_iter = iter(lines)
+ for line in line_iter:
+ if not "/* Subroutine */" in line:
+ yield line
+ continue
+
+ is_definition = line.startswith("/* Subroutine */")
+ stop = ")" if is_definition else ";"
+ if stop in line:
+ yield line
+ continue
+
+ sub_lines = [line.rstrip()]
+ for line in line_iter:
+ sub_lines.append(line.strip())
+ if stop in line:
+ break
+ joined_line = " ".join(sub_lines)
+ if is_definition:
+ yield joined_line
+ else:
+ yield from (x + ";" for x in joined_line.split(";")[:-1])
+
+
+def fix_inconsistent_decls(lines: List[str]) -> List[str]:
+ """
+ Fortran functions in id_dist use implicit casting of function args which f2c
+ doesn't support.
- In scipy, we build fortran linkage wrappers for all char* CLAPACK calls.
- We just need to replace the calls and extern definitions in f2c generated code.
+ The fortran equivalent of the following code:
- Annoyingly, we can't just patch the fortran code to use the wrapped names because f2c
- has a limit of 6 character function names.
+ double f(double x){
+ return x + 5;
+ }
+ double g(int x){
+ return f(x);
+ }
+
+ gets f2c'd to:
+
+ double f(double x){
+ return x + 5;
+ }
+ double g(int x){
+ double f(int);
+ return f(x);
+ }
+
+ which fails to compile because the declaration of f type clashes with the
+ definition. Gather up all the definitions in each file and then gathers the
+ declarations and fixes them if necessary so that the declaration matches the
+ definition.
+
+ >>> print("\\n".join(fix_inconsistent_decls(prepare_doctest('''
+ ... /* Subroutine */ double f(double x){
+ ... return x + 5;
+ ... }
+ ... /* Subroutine */ double g(int x){
+ ... extern /* Subroutine */ double f(int);
+ ... return f(x);
+ ... }
+ ... '''))))
+ /* Subroutine */ double f(double x){
+ return x + 5;
+ }
+ /* Subroutine */ double g(int x){
+ extern /* Subroutine */ double f(double);
+ return f(x);
+ }
+ """
+ func_types = {}
+ lines = list(regroup_lines(lines))
+ for line in lines:
+ if not line.startswith("/* Subroutine */"):
+ continue
+ [func_name, types] = get_subroutine_decl(line)
+ func_types[func_name] = types
+
+ for idx, line in enumerate(lines):
+ if not "extern /* Subroutine */" in line:
+ continue
+ decls = line.split(")")[:-1]
+ for decl in decls:
+ [func_name, types] = get_subroutine_decl(decl)
+ if func_name not in func_types or types == func_types[func_name]:
+ continue
+ types = func_types[func_name]
+ l = list(line.partition(func_name + "("))
+ l[2:] = list(l[2].partition(")"))
+ l[2] = ", ".join(types)
+ line = "".join(l)
+ lines[idx] = line
+ return lines
+
+
+def get_subroutine_decl(sub: str) -> Tuple[str, List[str]]:
+ """
+ >>> get_subroutine_decl(
+ ... "extern /* Subroutine */ int dqelg_(integer *, doublereal *, doublereal *, doublereal *, doublereal *, integer *);"
+ ... )
+ ('dqelg_', ['integer *', 'doublereal *', 'doublereal *', 'doublereal *', 'doublereal *', 'integer *'])
"""
- # fmt: off
- lapack_names = [
- "lsame_", "cgbmv_", "cgemm_", "cgemv_", "chbmv_",
- "chemm_", "chemv_", "cher_", "cher2_", "cher2k_", "cherk_",
- "chpmv_", "chpr_", "chpr2_", "csymm_", "csyr2k_", "csyrk_",
- "ctbmv_", "ctbsv_", "ctpmv_", "ctpsv_", "ctrmm_", "ctrmv_",
- "ctrsm_", "ctrsv_", "dgbmv_", "dgemm_", "dgemv_", "dsbmv_",
- "dspmv_", "dspr_", "dspr2_", "dsymm_", "dsymv_", "dsyr_", "dsyr2_",
- "dsyr2k_", "dsyrk_", "dtbmv_", "dtbsv_", "dtpmv_", "dtpsv_",
- "dtrmm_", "dtrmv_", "dtrsm_", "dtrsv_", "sgbmv_", "sgemm_",
- "sgemv_", "ssbmv_", "sspmv_", "sspr_", "sspr2_", "ssymm_",
- "ssymv_", "ssyr_", "ssyr2_", "ssyr2k_", "ssyrk_", "stbmv_",
- "stbsv_", "stpmv_", "stpsv_", "strmm_", "strmv_", "strsm_",
- "strsv_", "zgbmv_", "zgemm_", "zgemv_", "zhbmv_", "zhemm_",
- "zhemv_", "zher_", "zher2_", "zher2k_", "zherk_", "zhpmv_",
- "zhpr_", "zhpr2_", "zsymm_", "zsyr2k_", "zsyrk_", "ztbmv_",
- "ztbsv_", "ztpmv_", "ztpsv_", "ztrmm_", "ztrmv_", "ztrsm_",
- "ztrsv_", "clangb_", "clange_", "clangt_", "clanhb_", "clanhe_",
- "clanhp_", "clanhs_", "clanht_", "clansb_", "clansp_", "clansy_",
- "clantb_", "clantp_", "clantr_", "dlamch_", "dlangb_", "dlange_",
- "dlangt_", "dlanhs_", "dlansb_", "dlansp_", "dlanst_", "dlansy_",
- "dlantb_", "dlantp_", "dlantr_", "slamch_", "slangb_", "slange_",
- "slangt_", "slanhs_", "slansb_", "slansp_", "slanst_", "slansy_",
- "slantb_", "slantp_", "slantr_", "zlangb_", "zlange_", "zlangt_",
- "zlanhb_", "zlanhe_", "zlanhp_", "zlanhs_", "zlanht_", "zlansb_",
- "zlansp_", "zlansy_", "zlantb_", "zlantp_", "zlantr_", "cbdsqr_",
- "cgbbrd_", "cgbcon_", "cgbrfs_", "cgbsvx_", "cgbtrs_", "cgebak_",
- "cgebal_", "cgecon_", "cgees_", "cgeesx_", "cgeev_", "cgeevx_",
- "cgels_", "cgerfs_", "cgesdd_", "cgesvd_", "cgesvx_", "cgetrs_",
- "cggbak_", "cggbal_", "cgges_", "cggesx_", "cggev_", "cggevx_",
- "cgghrd_", "cgtcon_", "cgtrfs_", "cgtsvx_", "cgttrs_", "chbev_",
- "chbevd_", "chbevx_", "chbgst_", "chbgv_", "chbgvd_", "chbgvx_",
- "chbtrd_", "checon_", "cheev_", "cheevd_", "cheevr_", "cheevx_",
- "chegs2_", "chegst_", "chegv_", "chegvd_", "chegvx_", "cherfs_",
- "chesv_", "chesvx_", "chetd2_", "chetf2_", "chetrd_", "chetrf_",
- "chetri_", "chetrs_", "chgeqz_", "chpcon_", "chpev_", "chpevd_",
- "chpevx_", "chpgst_", "chpgv_", "chpgvd_", "chpgvx_", "chprfs_",
- "chpsv_", "chpsvx_", "chptrd_", "chptrf_", "chptri_", "chptrs_",
- "chsein_", "chseqr_", "clacp2_", "clacpy_", "clagtm_", "clahef_",
- "clalsd_", "claqgb_", "claqge_", "claqhb_", "claqhe_", "claqhp_",
- "claqsb_", "claqsp_", "claqsy_", "clarf_", "clarfb_", "clarft_",
- "clarfx_", "clarz_", "clarzb_", "clarzt_", "clascl_", "claset_",
- "clasr_", "clasyf_", "clatbs_", "clatps_", "clatrd_", "clatrs_",
- "clauu2_", "clauum_", "cpbcon_", "cpbequ_", "cpbrfs_", "cpbstf_",
- "cpbsv_", "cpbsvx_", "cpbtf2_", "cpbtrf_", "cpbtrs_", "cpocon_",
- "cporfs_", "cposv_", "cposvx_", "cpotf2_", "cpotrf_", "cpotri_",
- "cpotrs_", "cppcon_", "cppequ_", "cpprfs_", "cppsv_", "cppsvx_",
- "cpptrf_", "cpptri_", "cpptrs_", "cpteqr_", "cptrfs_", "cptsvx_",
- "cpttrs_", "cspcon_", "cspmv_", "cspr_", "csprfs_", "cspsv_",
- "cspsvx_", "csptrf_", "csptri_", "csptrs_", "cstedc_", "cstegr_",
- "cstemr_", "csteqr_", "csycon_", "csymv_", "csyr_", "csyrfs_",
- "csysv_", "csysvx_", "csytf2_", "csytrf_", "csytri_", "csytrs_",
- "ctbcon_", "ctbrfs_", "ctbtrs_", "ctgevc_", "ctgsja_", "ctgsna_",
- "ctgsy2_", "ctgsyl_", "ctpcon_", "ctprfs_", "ctptri_", "ctptrs_",
- "ctrcon_", "ctrevc_", "ctrexc_", "ctrrfs_", "ctrsen_", "ctrsna_",
- "ctrsyl_", "ctrti2_", "ctrtri_", "ctrtrs_", "cungbr_", "cungtr_",
- "cunm2l_", "cunm2r_", "cunmbr_", "cunmhr_", "cunml2_", "cunmlq_",
- "cunmql_", "cunmqr_", "cunmr2_", "cunmr3_", "cunmrq_", "cunmrz_",
- "cunmtr_", "cupgtr_", "cupmtr_", "dbdsdc_", "dbdsqr_", "ddisna_",
- "dgbbrd_", "dgbcon_", "dgbrfs_", "dgbsvx_", "dgbtrs_", "dgebak_",
- "dgebal_", "dgecon_", "dgees_", "dgeesx_", "dgeev_", "dgeevx_", "dgels_",
- "dgerfs_", "dgesdd_", "dgesvd_", "dgesvx_", "dgetrs_", "dggbak_",
- "dggbal_", "dgges_", "dggesx_", "dggev_", "dggevx_", "dgghrd_", "dgtcon_",
- "dgtrfs_", "dgtsvx_", "dgttrs_", "dhgeqz_", "dhsein_", "dhseqr_",
- "dlacpy_", "dlagtm_", "dlalsd_", "dlaqgb_", "dlaqge_", "dlaqsb_",
- "dlaqsp_", "dlaqsy_", "dlarf_", "dlarfb_", "dlarft_", "dlarfx_", "dlarrc_",
- "dlarrd_", "dlarre_", "dlarz_", "dlarzb_", "dlarzt_", "dlascl_", "dlasdq_",
- "dlaset_", "dlasr_", "dlasrt_", "dlasyf_", "dlatbs_", "dlatps_", "dlatrd_",
- "dlatrs_", "dlauu2_", "dlauum_", "dopgtr_", "dopmtr_", "dorgbr_",
- "dorgtr_", "dorm2l_", "dorm2r_", "dormbr_", "dormhr_", "dorml2_",
- "dormlq_", "dormql_", "dormqr_", "dormr2_", "dormr3_", "dormrq_",
- "dormrz_", "dormtr_", "dpbcon_", "dpbequ_", "dpbrfs_", "dpbstf_", "dpbsv_",
- "dpbsvx_", "dpbtf2_", "dpbtrf_", "dpbtrs_", "dpocon_", "dporfs_", "dposv_",
- "dposvx_", "dpotf2_", "dpotrf_", "dpotri_", "dpotrs_", "dppcon_",
- "dppequ_", "dpprfs_", "dppsv_", "dppsvx_", "dpptrf_", "dpptri_", "dpptrs_",
- "dpteqr_", "dptsvx_", "dsbev_", "dsbevd_", "dsbevx_", "dsbgst_", "dsbgv_",
- "dsbgvd_", "dsbgvx_", "dsbtrd_", "dspcon_", "dspev_", "dspevd_", "dspevx_",
- "dspgst_", "dspgv_", "dspgvd_", "dspgvx_", "dsprfs_", "dspsv_", "dspsvx_",
- "dsptrd_", "dsptrf_", "dsptri_", "dsptrs_", "dstebz_", "dstedc_",
- "dstegr_", "dstemr_", "dsteqr_", "dstev_", "dstevd_", "dstevr_", "dstevx_",
- "dsycon_", "dsyev_", "dsyevd_", "dsyevr_", "dsyevx_", "dsygs2_", "dsygst_",
- "dsygv_", "dsygvd_", "dsygvx_", "dsyrfs_", "dsysv_", "dsysvx_", "dsytd2_",
- "dsytf2_", "dsytrd_", "dsytrf_", "dsytri_", "dsytrs_", "dtbcon_",
- "dtbrfs_", "dtbtrs_", "dtgevc_", "dtgsja_", "dtgsna_", "dtgsy2_",
- "dtgsyl_", "dtpcon_", "dtprfs_", "dtptri_", "dtptrs_", "dtrcon_",
- "dtrevc_", "dtrexc_", "dtrrfs_", "dtrsen_", "dtrsna_", "dtrsyl_",
- "dtrti2_", "dtrtri_", "dtrtrs_", "sbdsdc_", "sbdsqr_", "sdisna_",
- "sgbbrd_", "sgbcon_", "sgbrfs_", "sgbsvx_", "sgbtrs_", "sgebak_",
- "sgebal_", "sgecon_", "sgees_", "sgeesx_", "sgeev_", "sgeevx_", "sgels_",
- "sgerfs_", "sgesdd_", "sgesvd_", "sgesvx_", "sgetrs_", "sggbak_",
- "sggbal_", "sgges_", "sggesx_", "sggev_", "sggevx_", "sgghrd_", "sgtcon_",
- "sgtrfs_", "sgtsvx_", "sgttrs_", "shgeqz_", "shsein_", "shseqr_",
- "slacpy_", "slagtm_", "slalsd_", "slaqgb_", "slaqge_", "slaqsb_",
- "slaqsp_", "slaqsy_", "slarf_", "slarfb_", "slarft_", "slarfx_", "slarrc_",
- "slarrd_", "slarre_", "slarz_", "slarzb_", "slarzt_", "slascl_", "slasdq_",
- "slaset_", "slasr_", "slasrt_", "slasyf_", "slatbs_", "slatps_", "slatrd_",
- "slatrs_", "slauu2_", "slauum_", "sopgtr_", "sopmtr_", "sorgbr_",
- "sorgtr_", "sorm2l_", "sorm2r_", "sormbr_", "sormhr_", "sorml2_",
- "sormlq_", "sormql_", "sormqr_", "sormr2_", "sormr3_", "sormrq_",
- "sormrz_", "sormtr_", "spbcon_", "spbequ_", "spbrfs_", "spbstf_", "spbsv_",
- "spbsvx_", "spbtf2_", "spbtrf_", "spbtrs_", "spocon_", "sporfs_", "sposv_",
- "sposvx_", "spotf2_", "spotrf_", "spotri_", "spotrs_", "sppcon_",
- "sppequ_", "spprfs_", "sppsv_", "sppsvx_", "spptrf_", "spptri_", "spptrs_",
- "spteqr_", "sptsvx_", "ssbev_", "ssbevd_", "ssbevx_", "ssbgst_", "ssbgv_",
- "ssbgvd_", "ssbgvx_", "ssbtrd_", "sspcon_", "sspev_", "sspevd_", "sspevx_",
- "sspgst_", "sspgv_", "sspgvd_", "sspgvx_", "ssprfs_", "sspsv_", "sspsvx_",
- "ssptrd_", "ssptrf_", "ssptri_", "ssptrs_", "sstebz_", "sstedc_",
- "sstegr_", "sstemr_", "ssteqr_", "sstev_", "sstevd_", "sstevr_", "sstevx_",
- "ssycon_", "ssyev_", "ssyevd_", "ssyevr_", "ssyevx_", "ssygs2_", "ssygst_",
- "ssygv_", "ssygvd_", "ssygvx_", "ssyrfs_", "ssysv_", "ssysvx_", "ssytd2_",
- "ssytf2_", "ssytrd_", "ssytrf_", "ssytri_", "ssytrs_", "stbcon_",
- "stbrfs_", "stbtrs_", "stgevc_", "stgsja_", "stgsna_", "stgsy2_",
- "stgsyl_", "stpcon_", "stprfs_", "stptri_", "stptrs_", "strcon_",
- "strevc_", "strexc_", "strrfs_", "strsen_", "strsna_", "strsyl_",
- "strti2_", "strtri_", "strtrs_", "zbdsqr_", "zgbbrd_", "zgbcon_",
- "zgbrfs_", "zgbsvx_", "zgbtrs_", "zgebak_", "zgebal_", "zgecon_", "zgees_",
- "zgeesx_", "zgeev_", "zgeevx_", "zgels_", "zgerfs_", "zgesdd_", "zgesvd_",
- "zgesvx_", "zgetrs_", "zggbak_", "zggbal_", "zgges_", "zggesx_", "zggev_",
- "zggevx_", "zgghrd_", "zgtcon_", "zgtrfs_", "zgtsvx_", "zgttrs_", "zhbev_",
- "zhbevd_", "zhbevx_", "zhbgst_", "zhbgv_", "zhbgvd_", "zhbgvx_", "zhbtrd_",
- "zhecon_", "zheev_", "zheevd_", "zheevr_", "zheevx_", "zhegs2_", "zhegst_",
- "zhegv_", "zhegvd_", "zhegvx_", "zherfs_", "zhesv_", "zhesvx_", "zhetd2_",
- "zhetf2_", "zhetrd_", "zhetrf_", "zhetri_", "zhetrs_", "zhgeqz_",
- "zhpcon_", "zhpev_", "zhpevd_", "zhpevx_", "zhpgst_", "zhpgv_", "zhpgvd_",
- "zhpgvx_", "zhprfs_", "zhpsv_", "zhpsvx_", "zhptrd_", "zhptrf_", "zhptri_",
- "zhptrs_", "zhsein_", "zhseqr_", "zlacp2_", "zlacpy_", "zlagtm_",
- "zlahef_", "zlalsd_", "zlaqgb_", "zlaqge_", "zlaqhb_", "zlaqhe_",
- "zlaqhp_", "zlaqsb_", "zlaqsp_", "zlaqsy_", "zlarf_", "zlarfb_", "zlarft_",
- "zlarfx_", "zlarz_", "zlarzb_", "zlarzt_", "zlascl_", "zlaset_", "zlasr_",
- "zlasyf_", "zlatbs_", "zlatps_", "zlatrd_", "zlatrs_", "zlauu2_",
- "zlauum_", "zpbcon_", "zpbequ_", "zpbrfs_", "zpbstf_", "zpbsv_", "zpbsvx_",
- "zpbtf2_", "zpbtrf_", "zpbtrs_", "zpocon_", "zporfs_", "zposv_", "zposvx_",
- "zpotf2_", "zpotrf_", "zpotri_", "zpotrs_", "zppcon_", "zppequ_",
- "zpprfs_", "zppsv_", "zppsvx_", "zpptrf_", "zpptri_", "zpptrs_", "zpteqr_",
- "zptrfs_", "zptsvx_", "zpttrs_", "zspcon_", "zspmv_", "zspr_", "zsprfs_",
- "zspsv_", "zspsvx_", "zsptrf_", "zsptri_", "zsptrs_", "zstedc_", "zstegr_",
- "zstemr_", "zsteqr_", "zsycon_", "zsymv_", "zsyr_", "zsyrfs_", "zsysv_",
- "zsysvx_", "zsytf2_", "zsytrf_", "zsytri_", "zsytrs_", "ztbcon_",
- "ztbrfs_", "ztbtrs_", "ztgevc_", "ztgsja_", "ztgsna_", "ztgsy2_",
- "ztgsyl_", "ztpcon_", "ztprfs_", "ztptri_", "ztptrs_", "ztrcon_",
- "ztrevc_", "ztrexc_", "ztrrfs_", "ztrsen_", "ztrsna_", "ztrsyl_",
- "ztrti2_", "ztrtri_", "ztrtrs_", "zungbr_", "zungtr_", "zunm2l_",
- "zunm2r_", "zunmbr_", "zunmhr_", "zunml2_", "zunmlq_", "zunmql_",
- "zunmqr_", "zunmr2_", "zunmr3_", "zunmrq_", "zunmrz_", "zunmtr_",
- "zupgtr_", "zupmtr_", "ilaenv_",
- ]
- # fmt: on
- code = None
- with open(f2c_output_name, "r") as f:
- code = f.read()
- for cur_name in lapack_names:
- code = re.sub(rf"\b{cur_name}\b", "w" + cur_name, code)
- if code:
- with open(f2c_output_name, "w") as f:
- f.write(code)
+ func_name = sub.partition("(")[0].rpartition(" ")[2]
+ args_str = sub.partition("(")[2].partition(")")[0]
+ args = args_str.split(",")
+ types = []
+ for arg in args:
+ arg = arg.strip()
+ if "*" in arg:
+ type = "".join(arg.partition("*")[:-1])
+ else:
+ type = arg.partition(" ")[0]
+ types.append(type.strip())
+ return (func_name, types)
diff --git a/pyodide-build/pyodide_build/pywasmcross.py b/pyodide-build/pyodide_build/pywasmcross.py
--- a/pyodide-build/pyodide_build/pywasmcross.py
+++ b/pyodide-build/pyodide_build/pywasmcross.py
@@ -39,7 +39,7 @@
# absolute import is necessary as this file will be symlinked
# under tools
from pyodide_build import common
-from pyodide_build._f2c_fixes import fix_f2c_clapack_calls
+from pyodide_build._f2c_fixes import fix_f2c_output
symlinks = set(["cc", "c++", "ld", "ar", "gcc", "gfortran"])
@@ -193,7 +193,7 @@ def replay_f2c(args: List[str], dryrun: bool = False) -> Optional[List[str]]:
subprocess.check_call(
["f2c", os.path.basename(filename)], cwd=os.path.dirname(filename)
)
- fix_f2c_clapack_calls(arg[:-2] + ".c")
+ fix_f2c_output(arg[:-2] + ".c")
new_args.append(arg[:-2] + ".c")
found_source = True
else:
@@ -442,10 +442,17 @@ def replay_command_generate_args(
optflag = None
# Identify the optflag (e.g. -O3) in cflags/cxxflags/ldflags. Last one has
# priority.
- for arg in new_args[::-1]:
+ for arg in reversed(new_args):
if arg in optflags_valid:
optflag = arg
break
+ debugflag = None
+ # Identify the debug flag (e.g. -g0) in cflags/cxxflags/ldflags. Last one has
+ # priority.
+ for arg in reversed(new_args):
+ if arg.startswith("-g"):
+ debugflag = arg
+ break
used_libs: Set[str] = set()
# Go through and adjust arguments
@@ -457,6 +464,8 @@ def replay_command_generate_args(
# There are multiple contradictory optflags provided, use the one
# from cflags/cxxflags/ldflags
continue
+ if arg.startswith("-g") and debugflag is not None:
+ continue
if new_args[-1].startswith("-B") and "compiler_compat" in arg:
# conda uses custom compiler search paths with the compiler_compat folder.
# Ignore it.
| diff --git a/packages/scikit-learn/test_scikit-learn.py b/packages/scikit-learn/test_scikit-learn.py
--- a/packages/scikit-learn/test_scikit-learn.py
+++ b/packages/scikit-learn/test_scikit-learn.py
@@ -1,26 +1,48 @@
import pytest
+from conftest import selenium_context_manager
@pytest.mark.driver_timeout(40)
-def test_scikit_learn(selenium_standalone, request):
- selenium = selenium_standalone
- selenium.load_package("scikit-learn")
- assert (
- selenium.run(
- """
- import numpy as np
- import sklearn
- from sklearn.linear_model import LogisticRegression
+def test_scikit_learn(selenium_module_scope):
+ if selenium_module_scope.browser == "chrome":
+ pytest.xfail("Times out in chrome")
+ with selenium_context_manager(selenium_module_scope) as selenium:
+ selenium.load_package("scikit-learn")
+ assert (
+ selenium.run(
+ """
+ import numpy as np
+ import sklearn
+ from sklearn.linear_model import LogisticRegression
+
+ rng = np.random.RandomState(42)
+ X = rng.rand(100, 20)
+ y = rng.randint(5, size=100)
+
+ estimator = LogisticRegression(solver='liblinear')
+ estimator.fit(X, y)
+ print(estimator.predict(X))
+ estimator.score(X, y)
+ """
+ )
+ > 0
+ )
- rng = np.random.RandomState(42)
- X = rng.rand(100, 20)
- y = rng.randint(5, size=100)
- estimator = LogisticRegression(solver='liblinear')
- estimator.fit(X, y)
- print(estimator.predict(X))
- estimator.score(X, y)
- """
[email protected]_timeout(40)
+def test_logistic_regression(selenium_module_scope):
+ if selenium_module_scope.browser == "chrome":
+ pytest.xfail("Times out in chrome")
+ with selenium_context_manager(selenium_module_scope) as selenium:
+ selenium.load_package("scikit-learn")
+ selenium.run(
+ """
+ from sklearn.datasets import load_iris
+ from sklearn.linear_model import LogisticRegression
+ X, y = load_iris(return_X_y=True)
+ clf = LogisticRegression(random_state=0).fit(X, y)
+ print(clf.predict(X[:2, :]))
+ print(clf.predict_proba(X[:2, :]))
+ print(clf.score(X, y))
+ """
)
- > 0
- )
diff --git a/packages/scipy/patches/rename-_page_trend_test.patch b/packages/scipy/patches/rename-_page_trend_test.patch
new file mode 100644
--- /dev/null
+++ b/packages/scipy/patches/rename-_page_trend_test.patch
@@ -0,0 +1,34 @@
+From 6ffb169dbe331627d70da3f79f928057f1a4e163 Mon Sep 17 00:00:00 2001
+From: Hood Chatham <[email protected]>
+Date: Sun, 26 Dec 2021 07:34:40 -0800
+Subject: [PATCH] Rename _page_trend_test.py to prevent test unvendoring
+
+unvendor_tests will unvendor any file that ends in _test.py.
+Prevent that by changing the name of this file.
+
+---
+ scipy/stats/__init__.py | 2 +-
+ scipy/stats/{_page_trend_test.py => _page_trend_test_.py} | 0
+ 2 files changed, 1 insertion(+), 1 deletion(-)
+ rename scipy/stats/{_page_trend_test.py => _page_trend_test_.py} (100%)
+
+diff --git a/scipy/stats/__init__.py b/scipy/stats/__init__.py
+index f410f5410..ad702c38d 100644
+--- a/scipy/stats/__init__.py
++++ b/scipy/stats/__init__.py
+@@ -453,7 +453,7 @@ from ._bootstrap import bootstrap
+ from ._entropy import *
+ from ._hypotests import *
+ from ._rvs_sampling import rvs_ratio_uniforms, NumericalInverseHermite
+-from ._page_trend_test import page_trend_test
++from ._page_trend_test_ import page_trend_test
+ from ._mannwhitneyu import mannwhitneyu
+
+ __all__ = [s for s in dir() if not s.startswith("_")] # Remove dunders.
+diff --git a/scipy/stats/_page_trend_test.py b/scipy/stats/_page_trend_test_.py
+similarity index 100%
+rename from scipy/stats/_page_trend_test.py
+rename to scipy/stats/_page_trend_test_.py
+--
+2.25.1
+
diff --git a/packages/scipy/test_scipy.py b/packages/scipy/test_scipy.py
--- a/packages/scipy/test_scipy.py
+++ b/packages/scipy/test_scipy.py
@@ -1,37 +1,52 @@
-from textwrap import dedent
-
import pytest
+from conftest import selenium_context_manager
@pytest.mark.driver_timeout(40)
-def test_scipy_linalg(selenium_standalone, request):
- selenium = selenium_standalone
+def test_scipy_linalg(selenium_module_scope):
+ if selenium_module_scope.browser == "chrome":
+ pytest.xfail("Times out in chrome")
+ with selenium_context_manager(selenium_module_scope) as selenium:
+ selenium.load_package("scipy")
+ selenium.run(
+ r"""
+ import numpy as np
+ import scipy as sp
+ import scipy.linalg
+ from numpy.testing import assert_allclose
- selenium.load_package("scipy")
- cmd = dedent(
- r"""
- import numpy as np
- import scipy as sp
- import scipy.linalg
- from numpy.testing import assert_allclose
+ N = 10
+ X = np.random.RandomState(42).rand(N, N)
- N = 10
- X = np.random.RandomState(42).rand(N, N)
+ X_inv = scipy.linalg.inv(X)
- X_inv = scipy.linalg.inv(X)
+ res = X.dot(X_inv)
- res = X.dot(X_inv)
+ assert_allclose(res, np.identity(N),
+ rtol=1e-07, atol=1e-9)
+ """
+ )
- assert_allclose(res, np.identity(N),
- rtol=1e-07, atol=1e-9)
- """
- )
- selenium.run(cmd)
[email protected]_timeout(40)
+def test_brentq(selenium_module_scope):
+ with selenium_context_manager(selenium_module_scope) as selenium:
+ selenium.load_package("scipy")
+ selenium.run(
+ """
+ from scipy.optimize import brentq
+ brentq(lambda x: x, -1, 1)
+ """
+ )
@pytest.mark.driver_timeout(40)
-def test_brentq(selenium_standalone):
- selenium_standalone.load_package("scipy")
- selenium_standalone.run("from scipy.optimize import brentq")
- selenium_standalone.run("brentq(lambda x: x, -1, 1)")
+def test_dlamch(selenium_module_scope):
+ with selenium_context_manager(selenium_module_scope) as selenium:
+ selenium.load_package("scipy")
+ selenium.run(
+ """
+ from scipy.linalg import lapack
+ lapack.dlamch('Epsilon-Machine')
+ """
+ )
diff --git a/pyodide-build/pyodide_build/tests/test_f2c_fixes.py b/pyodide-build/pyodide_build/tests/test_f2c_fixes.py
deleted file mode 100644
--- a/pyodide-build/pyodide_build/tests/test_f2c_fixes.py
+++ /dev/null
@@ -1,27 +0,0 @@
-from textwrap import dedent
-
-from pyodide_build._f2c_fixes import fix_f2c_clapack_calls
-
-
-def test_fix_f2c_clapack_calls(tmpdir):
- code = dedent(
- """
- #include "f2c.h"
-
- int sgemv_(char *trans, integer *m, integer *n, real *alpha)
- {
- return 0;
- }
- """
- )
-
- source_file = tmpdir / "sgemv.c"
- with open(source_file, "w") as fh:
- fh.write(code)
-
- fix_f2c_clapack_calls(str(source_file))
-
- with open(source_file, "r") as fh:
- code_fixed = fh.read()
-
- assert code_fixed == code.replace("sgemv_", "wsgemv_")
| Update scipy version
Currently we are using scipy 0.17 from 2016 because [later versions could include f90](https://github.com/scipy/scipy/issues/2829#issuecomment-223764054) and we don't know how to compile it.
However, I quickly looked at the scipy git log, and the mean edit frequency for Fortran files in scipy is 6 years, so even if f90 is accepted now, there has been no active modernization effort away from f77. This means that later scipy versions could potentially be built, and we could update. The latest version would be ideal so that we could contribute fixes upstream if needed.
Though the problem is that until dynamic linking is resolved (https://github.com/iodide-project/pyodide/issues/240), newer versions that add additional modules using LAPACK will increase the binary size, which is not desirable.
| The current status on this is that it's possible to update scipy, but later versions use BLAS/LAPACK in a larger number of modules, which would translate to even bigger package sizes until we address it via https://github.com/iodide-project/pyodide/issues/240 or https://github.com/iodide-project/pyodide/issues/874
Any updates on when scipy will be updated? Looking to run a function (https://docs.scipy.org/doc/scipy/reference/generated/scipy.linalg.cossin.html) which is available from scipy 1.5.0
Same thing. I'm lacking functions from new scipy =)
To move this forward, we are working on compiling SciPy with LFortran: https://gitlab.com/lfortran/lfortran/-/issues/582. We can parse most of the files; some require either minor modifications or (preferably) a few fixes in our fixed-form parser.
I figured the best way would be to get the simplest self-contained part of SciPy working first, which seems to be `stats/statlib`: https://gitlab.com/lfortran/lfortran/-/issues/581. We can parse those files and are now working on the semantics. We are not far off, but we still have to implement a few things to get all the semantics working. Once that works, we'll see what needs to be fixed in our LLVM backend, probably a few things as well.
Once we can generate correct LLVM IR for those subroutines, what would be the next step? I assume we select the Wasm backend in LLVM to generate Wasm. LFortran's runtime library is mostly written in Fortran, so that should translate nicely. There are some things that reach into C (such as file IO), but I think those are not needed for SciPy.
What about things like math functions such as `sin(x)`? We have two versions: one in pure Fortran that is performance-first, with roughly 1e-15 relative accuracy and an argument range of roughly `|x| < 1e10`. If `x` is outside that range, it will return incorrect results. These functions are meant for Release mode, when speed is needed. Then we have an accuracy-first version, which is much slower but simply calls into `libc`, so it depends on C. I assume we can add special handling for Wasm into our LLVM backend to call some accurate `sin(x)` implementation that is available for Wasm from somewhere. What is the best approach here?
This is obviously a longer term project, but we will eventually get there. The important thing is to start now, get simple things working all the way, and then eventually be able to compile all of SciPy. If there is anybody who wants to help us with this work, please let us know. We are always looking for new contributors. I created a general issue to add a Wasm backend to LFortran here: https://gitlab.com/lfortran/lfortran/-/issues/583.
Great to hear that @certik ! Having an alternative compiler for scipy would already be very useful on x86_64.
> Once we can generate correct LLVM IR for those subroutines, what would be the next step? I assume we select the Wasm backend in LLVM to generate Wasm.
I assume so but I'm not familiar with LLVM. Looks like @StarGate01 managed to make that part work in https://github.com/pyodide/pyodide/issues/184#issuecomment-616948624
> What about things like math functions such as sin(x)? .. What is the best approach here?
Well, the scipy test suite would ideally need to pass on x86_64 (or have its tolerances adjusted), so that will tell us how critical it is. In Pyodide, however, we currently still have a number of failures, in particular due to an issue with float underflow/overflow handling https://github.com/pyodide/pyodide/issues/156
> The important thing is to start now, get simple things working all the way, and then eventually be able to compile all of SciPy.
Absolutely, thanks for starting this discussion!
To give a bit more context: even once there is a compiler that is able to compile Fortran for WASM (related: https://github.com/pyodide/pyodide/issues/184), Pyodide has other challenges for building scipy. Due to function signature mismatches in dynamic linking that apparently work in non-WASM environments but make WASM crash arbitrarily, we currently [have lots of patches](https://github.com/pyodide/pyodide/tree/main/packages/scipy/patches) to try to work around that, thanks to work done by @joemarshall . Where it makes sense, those should be contributed upstream to scipy in the long term. At least that's an issue with scipy + CBLAS + CLAPACK + libf2c; it might not be a problem with scipy + BLAS + LAPACK (in Fortran), not sure. Also, ideally we would need to switch to a more optimized BLAS #227 in the future.
Also, congrats on the progress made on the LFortran project!
Also to give a general status update on this issue the latest WIP attempt to update is in https://github.com/pyodide/pyodide/pull/1293 but it requires lots of time and effort to fix all the signature mismatches and port patches.
@rth, awesome, thanks a lot for the tips. It looks like a good first step for us is to:
* Compile `stats/statlib` with just the regular CPU LLVM backends (x86, ARM) and make sure the scipy tests pass.
Although perhaps minor from your side, that would be a huge milestone for us. I think it's necessary for any further work, and we are not that far off. Once that works, we can work on two things in parallel:
* Implement more Fortran features to make it work for other SciPy modules also
* Get `stats/statlib` working in the browser by reusing the f2py SciPy interface to it and creating a simple Python project using it.
The last point would ensure all the pieces are working with regards to WebAssembly and Pyodide, and people can start using LFortran for simpler Fortran/Python projects. Then to get SciPy working, "all" we need to do is to implement the rest of the Fortran features that other SciPy modules use.
We are looking for more contributors to help us get over the finish line sooner. For example a huge help would be to iron out the WebAssembly and f2py specific interface issues with regards to LFortran, any potential user can help us with that. We can start this work today (update: we literally started today: https://gitlab.com/lfortran/lfortran/-/merge_requests/1385) on just some simple Fortran code and get it working. If anybody reading this is interested in using Fortran in the browser, please get in touch, I am happy to get you started. | 2021-12-23T16:00:22 |
pyodide/pyodide | 2,099 | pyodide__pyodide-2099 | [
"2057"
] | cced9017bd6cf40fdfb150c9ce436292cf326d82 | diff --git a/docs/conf.py b/docs/conf.py
--- a/docs/conf.py
+++ b/docs/conf.py
@@ -127,7 +127,7 @@
epub_exclude_files = ["search.html"]
if "READTHEDOCS" in os.environ:
- env = {"PYODIDE_BASE_URL": "https://cdn.jsdelivr.net/pyodide/dev/full/"}
+ env = {"PYODIDE_BASE_URL": "https://cdn.jsdelivr.net/pyodide/v0.19.0/full/"}
os.makedirs("_build/html", exist_ok=True)
res = subprocess.check_output(
["make", "-C", "..", "docs/_build/html/console.html"],
diff --git a/src/py/pyodide/__init__.py b/src/py/pyodide/__init__.py
--- a/src/py/pyodide/__init__.py
+++ b/src/py/pyodide/__init__.py
@@ -40,7 +40,7 @@
asyncio.set_event_loop_policy(WebLoopPolicy())
-__version__ = "0.19.0dev0"
+__version__ = "0.19.0"
__all__ = [
"open_url",
| 0.19 release
Opening an issue to track the 0.19 release.
I think we are fairly ready to make the 0.19.0 release. Ideally, the following would be nice to do before the 0.19 alpha release (but not critical),
- update browser versions used for testing https://github.com/pyodide/pyodide/pull/1952
- which should hopefully unblock emscripten update https://github.com/pyodide/pyodide/pull/2035
- run the update `meta.yaml` script for all pure python packages
For instance, maybe we could plan to have a
- A release candidate 0.19.0rc0: 2021/12/22 or 23 -> to make things easier, keep using the main branch up to the final release
- A final release 0.19.0: 2021/12/30
?
cc @hoodmane
| We have some new APIs that are related to downloading files/scripts and importing them.
- pyodide.http.pyfetch (#1865)
- pyodide.unpackArchive (#2018)
- pyodide.pyimport (#1944)
These APIs can benefit many users, so I think it would be great to update the documentation for these APIs before releasing 0.19. (A rough `pyfetch` usage sketch follows the list below.)
Maybe we could have two new sections about:
- File download (ongoing PR?: #1861)
- Importing python script
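For example, the file-download piece with `pyodide.http.pyfetch` looks roughly like this (a sketch only; the URL is a placeholder, and it assumes a context that allows top-level `await`, such as the Pyodide console):

```python
from pyodide.http import pyfetch

# Fetch a file over HTTP from inside Pyodide (placeholder URL)
resp = await pyfetch("https://example.com/data.json")
data = await resp.json()  # .bytes() and .string() are also available
```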
> These APIs can benefit many users, so I think it would be great to update the documentation for these APIs before releasing 0.19.
Yes, definitely. Thanks for mentioning it @ryanking13 !
0.19.0a1 was released by @hoodmane yesterday, but we should probably still address some of the above topics before the release, and certainly the documentation.
I think we are good to go for the 0.19.0 release. The only remaining thing is to write a short blog post with major highlights from this release. If you have bandwidth, @hoodmane @ryanking13, feel free to start it; otherwise I can look into it tomorrow.
Okay, I will try to get to it.
Maybe I will update the roadmap a bit too.
Okay I made a very rough sketch of some release notes:
https://github.com/pyodide/pyodide-blog/pull/18 | 2022-01-10T23:15:58 |
|
pyodide/pyodide | 2,175 | pyodide__pyodide-2175 | [
"2166"
] | f784e5e3d0d8ad1a8e8fecbe61b22ee1e4070133 | diff --git a/packages/micropip/src/micropip/_micropip.py b/packages/micropip/src/micropip/_micropip.py
--- a/packages/micropip/src/micropip/_micropip.py
+++ b/packages/micropip/src/micropip/_micropip.py
@@ -295,7 +295,20 @@ async def add_requirement(
async def add_wheel(self, name, wheel, version, extras, ctx, transaction):
transaction["locked"][name] = PackageMetadata(name=name, version=version)
- wheel_bytes = await fetch_bytes(wheel["url"])
+
+ try:
+ wheel_bytes = await fetch_bytes(wheel["url"])
+ except Exception as e:
+ if wheel["url"].startswith("https://files.pythonhosted.org/"):
+ raise e
+ else:
+ raise ValueError(
+ f"Couldn't fetch wheel from '{wheel['url']}'."
+ "One common reason for this is when the server blocks "
+ "Cross-Origin Resource Sharing (CORS)."
+ "Check if the server is sending the correct 'Access-Control-Allow-Origin' header."
+ ) from e
+
wheel["wheel_bytes"] = wheel_bytes
with ZipFile(io.BytesIO(wheel_bytes)) as zip_file: # type: ignore
| diff --git a/packages/micropip/test_micropip.py b/packages/micropip/test_micropip.py
--- a/packages/micropip/test_micropip.py
+++ b/packages/micropip/test_micropip.py
@@ -345,6 +345,22 @@ def test_install_keep_going(monkeypatch):
)
+def test_fetch_wheel_fail(monkeypatch):
+ pytest.importorskip("packaging")
+ from micropip import _micropip
+
+ def _mock_fetch_bytes(*args, **kwargs):
+ raise Exception("Failed to fetch")
+
+ monkeypatch.setattr(_micropip, "fetch_bytes", _mock_fetch_bytes)
+
+ msg = "Access-Control-Allow-Origin"
+ with pytest.raises(ValueError, match=msg):
+ asyncio.get_event_loop().run_until_complete(
+ _micropip.install("htps://x.com/xxx-1.0.0-py3-none-any.whl")
+ )
+
+
def test_list_pypi_package(monkeypatch):
pytest.importorskip("packaging")
from micropip import _micropip
| micropip.install not allowing for other wheel URLs other than PyPI
## π Bug
Hi. `pyodide` is great, so thanks to the dev team for this amazing work. I'm marking this as a "bug" as the docs seem to indicate a different behavior then I'm seeing, but apologies if you think this should be a GitHub Discussion instead (and please feel free to move it).
In the `pyodide` [`v0.19.0` docs it mentions for `micropip.install`](https://pyodide.org/en/stable/usage/api/micropip-api.html#module-micropip) that the `requirements` arg can be
> A requirement or list of requirements to install. Each requirement is a string, which should be either a package name or URL to a wheel:
> * If the requirement ends in .whl it will be interpreted as a URL. The file must be a wheel named in compliance with the [PEP 427 naming convention](https://www.python.org/dev/peps/pep-0427/#file-format).
> * If the requirement does not end in .whl, it will interpreted as the name of a package. A package by this name must either be present in the Pyodide repository at indexURL <globalThis.loadPyodide> or on PyPI
The docs also mention "other URLS" here
https://github.com/pyodide/pyodide/blob/950c1dcbd271f90faea0d205a7c95fcd40710a69/docs/usage/loading-packages.md?plain=1#L9-L10
This doesn't seem to be the case for at least TestPyPI. I have wheels for my team's [pure-Python project](https://pypi.org/project/pyhf) there that are built with `build` in the same workflow that we use to upload to PyPI. The installation [from PyPI](https://pypi.org/project/pyhf/) and [from PyPI with a wheel URL](https://files.pythonhosted.org/packages/a4/49/23b30e056d807d8ff0c5de9c419c6becd560be1a676a2349f34727fe9aca/pyhf-0.6.3-py3-none-any.whl) works fine, but the installation [from a wheel URL from TestPyPI](https://test-files.pythonhosted.org/packages/4d/54/a749a9ed7840800dabf2f5eff340dcbd32524a21b673f35f8a7582658c31/pyhf-0.6.4.dev77-py3-none-any.whl) fails with
```
JsException: TypeError: Failed to fetch
```
### To Reproduce
Running on the [Pyodide terminal emulator](https://pyodide.org/en/stable/console.html) that you have nicely made available:
```python
>>> import micropip
>>> await micropip.install("https://test-files.pythonhosted.org/packages/4d/54/a749a9ed7840800dabf2f5eff340dcbd32524a21b673f3
5f8a7582658c31/pyhf-0.6.4.dev77-py3-none-any.whl")
Traceback (most recent call last):
File "<console>", line 1, in <module>
File "/lib/python3.9/asyncio/futures.py", line 284, in __await__
yield self # This tells Task to wait for completion.
File "/lib/python3.9/asyncio/tasks.py", line 328, in __wakeup
future.result()
File "/lib/python3.9/asyncio/futures.py", line 201, in result
raise self._exception
File "/lib/python3.9/asyncio/tasks.py", line 258, in __step
result = coro.throw(exc)
File "/lib/python3.9/site-packages/micropip/_micropip.py", line 185, in install
transaction = await self.gather_requirements(requirements, ctx, keep_going)
File "/lib/python3.9/site-packages/micropip/_micropip.py", line 175, in gather_requirements
await gather(*requirement_promises)
File "/lib/python3.9/asyncio/futures.py", line 284, in __await__
yield self # This tells Task to wait for completion.
File "/lib/python3.9/asyncio/tasks.py", line 328, in __wakeup
future.result()
File "/lib/python3.9/asyncio/futures.py", line 201, in result
raise self._exception
File "/lib/python3.9/asyncio/tasks.py", line 258, in __step
result = coro.throw(exc)
File "/lib/python3.9/site-packages/micropip/_micropip.py", line 249, in add_requirement
await self.add_wheel(name, wheel, version, (), ctx, transaction)
File "/lib/python3.9/site-packages/micropip/_micropip.py", line 299, in add_wheel
wheel_bytes = await fetch_bytes(wheel["url"])
File "/lib/python3.9/site-packages/micropip/_micropip.py", line 57, in fetch_bytes
return await (await pyfetch(url, **kwargs)).bytes()
File "/lib/python3.9/site-packages/pyodide/http.py", line 232, in pyfetch
url, await _jsfetch(url, to_js(kwargs, dict_converter=Object.fromEntries))
File "/lib/python3.9/asyncio/futures.py", line 284, in __await__
yield self # This tells Task to wait for completion.
File "/lib/python3.9/asyncio/tasks.py", line 328, in __wakeup
future.result()
File "/lib/python3.9/asyncio/futures.py", line 201, in result
raise self._exception
JsException: TypeError: Failed to fetch
```
Here's a replite example as well:
* Failure to install `pyhf-0.6.4.dev77-py3-none-any.whl` from TestPyPI from wheel URL [![lite-badge]][test_pypi_fail]
[lite-badge]: https://jupyterlite.rtfd.io/en/latest/_static/badge.svg
[test_pypi_fail]: https://replite.vercel.app/retro/consoles/?toolbar=1&kernel=python&code=import%20micropip%0Aawait%20micropip.install%28%22https%3A//test-files.pythonhosted.org/packages/4d/54/a749a9ed7840800dabf2f5eff340dcbd32524a21b673f35f8a7582658c31/pyhf-0.6.4.dev77-py3-none-any.whl%22%29
### Expected behavior
`pyodide` should install the wheel from TestPyPI, as the wheel's name is valid (it was built by `build`) and the URL is valid and ends in `.whl`.
Examples of working behavior:
* Install `pyhf` `v0.6.3` successfully from PyPI [![lite-badge]][pypi_name]
* Install `pyhf` `v0.6.3` successfully from PyPI from wheel URL [![lite-badge]][pypi_wheel_url]
[pypi_name]: https://replite.vercel.app/retro/consoles/?toolbar=1&kernel=python&code=import%20micropip%0Aawait%20micropip.install%28%22pyhf%22%29%0Aimport%20pyhf
[pypi_wheel_url]: https://replite.vercel.app/retro/consoles/?toolbar=1&kernel=python&code=import%20micropip%0Aawait%20micropip.install%28%22https%3A//files.pythonhosted.org/packages/a4/49/23b30e056d807d8ff0c5de9c419c6becd560be1a676a2349f34727fe9aca/pyhf-0.6.3-py3-none-any.whl%22%29%0Aimport%20pyhf
### Environment
- Pyodide Version: 0.19.0
- Browser version: Google Chrome 98.0.4758.80
- Any other relevant information:
### Additional context
None
This is a problem with headers. The server must set the header `access-control-allow-origin: *` on the response. `files.pythonhosted.org` sets this header but `test-files.pythonhosted.org` does not.
Probably we should update the docs to indicate this. If you are serving the wheel from the same origin as the rest of the website, you don't need this header.
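For anyone serving wheels from their own server, here is a minimal sketch of a static file server that sets the header, using only the Python standard library (the port and directory are arbitrary choices):

```python
from http.server import HTTPServer, SimpleHTTPRequestHandler

class CORSRequestHandler(SimpleHTTPRequestHandler):
    def end_headers(self):
        # Allow cross-origin fetches (e.g. from micropip) of files served here
        self.send_header("Access-Control-Allow-Origin", "*")
        super().end_headers()

# Serve the current directory on http://localhost:8000 with CORS enabled
HTTPServer(("", 8000), CORSRequestHandler).serve_forever()
```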
> This is a problem with headers. The server must set the header `access-control-allow-origin: *` on the response. `files.pythonhosted.org` sets this header but `test-files.pythonhosted.org` does not.
> Probably we should update the docs to indicate this
Thanks for the fast response and explanation! I agree that a small note on this in the docs would be helpful.
pyodide/pyodide | 2,234 | pyodide__pyodide-2234 | [
"2088"
] | 36d0e15300875d834a6b37b6199934f2d194e858 | diff --git a/packages/matplotlib/src/html5_canvas_backend.py b/packages/matplotlib/src/html5_canvas_backend.py
--- a/packages/matplotlib/src/html5_canvas_backend.py
+++ b/packages/matplotlib/src/html5_canvas_backend.py
@@ -355,7 +355,8 @@ def _get_font(self, prop):
def get_text_width_height_descent(self, s, prop, ismath):
if ismath:
image, d = self.mathtext_parser.parse(s, self.dpi, prop)
- w, h = image.get_width(), image.get_height()
+ image_arr = np.asarray(image)
+ h, w = image_arr.shape
else:
font, _ = self._get_font(prop)
font.set_text(s, 0.0, flags=LOAD_NO_HINTING)
| diff --git a/packages/matplotlib/test_matplotlib.py b/packages/matplotlib/test_matplotlib.py
--- a/packages/matplotlib/test_matplotlib.py
+++ b/packages/matplotlib/test_matplotlib.py
@@ -83,7 +83,7 @@ def test_svg(selenium):
selenium.run("fd = io.BytesIO()")
selenium.run("plt.savefig(fd, format='svg')")
content = selenium.run("fd.getvalue().decode('utf8')")
- assert len(content) == 16283
+ assert len(content) == 14998
assert content.startswith("<?xml")
diff --git a/packages/test_packages_common.py b/packages/test_packages_common.py
--- a/packages/test_packages_common.py
+++ b/packages/test_packages_common.py
@@ -33,7 +33,7 @@ def registered_packages_meta():
UNSUPPORTED_PACKAGES: dict[str, list[str]] = {
"chrome": [],
"firefox": [],
- "node": [],
+ "node": ["cmyt"],
}
if "CI" in os.environ:
UNSUPPORTED_PACKAGES["chrome"].extend(["statsmodels"])
| Update yt
| We hit this issue: https://github.com/yt-project/yt/issues/3477
So I guess this has to wait on yt version 4.0.2. | 2022-03-01T22:03:04 |
pyodide/pyodide | 2,263 | pyodide__pyodide-2263 | [
"761"
] | 74e6809da1ed609ea6f4c671b2f9196d3ce5caa9 | diff --git a/pyodide-build/pyodide_build/buildpkg.py b/pyodide-build/pyodide_build/buildpkg.py
--- a/pyodide-build/pyodide_build/buildpkg.py
+++ b/pyodide-build/pyodide_build/buildpkg.py
@@ -136,6 +136,9 @@ def get_bash_runner():
"HOSTINSTALLDIR",
"PYMAJOR",
"PYMINOR",
+ "CPYTHONBUILD",
+ "STDLIB_MODULE_CFLAGS",
+ "SIDE_MODULE_LDFLAGS",
]
} | {"PYODIDE": "1"}
if "PYODIDE_JOBS" in os.environ:
diff --git a/pyodide-build/pyodide_build/common.py b/pyodide-build/pyodide_build/common.py
--- a/pyodide-build/pyodide_build/common.py
+++ b/pyodide-build/pyodide_build/common.py
@@ -3,7 +3,36 @@
from pathlib import Path
from typing import Optional
-UNVENDORED_STDLIB_MODULES = ["test", "distutils"]
+UNVENDORED_STDLIB_MODULES = {"test", "distutils"}
+
+ALWAYS_PACKAGES = {
+ "pyparsing",
+ "packaging",
+ "micropip",
+}
+
+CORE_PACKAGES = {
+ "micropip",
+ "pyparsing",
+ "pytz",
+ "packaging",
+ "Jinja2",
+ "regex",
+ "fpcast-test",
+ "sharedlib-test-py",
+ "cpp-exceptions-test",
+ "ssl",
+}
+
+CORE_SCIPY_PACKAGES = {
+ "numpy",
+ "scipy",
+ "pandas",
+ "matplotlib",
+ "scikit-learn",
+ "joblib",
+ "pytest",
+}
def _parse_package_subset(query: Optional[str]) -> set[str]:
@@ -32,35 +61,15 @@ def _parse_package_subset(query: Optional[str]) -> set[str]:
if query is None:
query = "core"
- core_packages = {
- "micropip",
- "pyparsing",
- "pytz",
- "packaging",
- "Jinja2",
- "regex",
- "fpcast-test",
- "sharedlib-test-py",
- "cpp-exceptions-test",
- }
- core_scipy_packages = {
- "numpy",
- "scipy",
- "pandas",
- "matplotlib",
- "scikit-learn",
- "joblib",
- "pytest",
- }
packages = {el.strip() for el in query.split(",")}
- packages.update(["pyparsing", "packaging", "micropip"])
+ packages.update(ALWAYS_PACKAGES)
packages.update(UNVENDORED_STDLIB_MODULES)
# handle meta-packages
if "core" in packages:
- packages |= core_packages
+ packages |= CORE_PACKAGES
packages.discard("core")
if "min-scipy-stack" in packages:
- packages |= core_packages | core_scipy_packages
+ packages |= CORE_PACKAGES | CORE_SCIPY_PACKAGES
packages.discard("min-scipy-stack")
# Hack to deal with the circular dependence between soupsieve and
| diff --git a/packages/cryptography/test_cryptography.py b/packages/cryptography/test_cryptography.py
new file mode 100644
--- /dev/null
+++ b/packages/cryptography/test_cryptography.py
@@ -0,0 +1,224 @@
+from hypothesis import HealthCheck, given, settings
+from hypothesis.strategies import binary, integers
+
+from conftest import selenium_context_manager
+from pyodide_build.testing import run_in_pyodide
+
+
+@run_in_pyodide(packages=["cryptography"])
+def test_cryptography():
+ import base64
+
+ from cryptography.fernet import Fernet, MultiFernet
+
+ f1 = Fernet(base64.urlsafe_b64encode(b"\x00" * 32))
+ f2 = Fernet(base64.urlsafe_b64encode(b"\x01" * 32))
+ f = MultiFernet([f1, f2])
+
+ assert f1.decrypt(f.encrypt(b"abc")) == b"abc"
+
+
+@run_in_pyodide(packages=["cryptography", "pytest"])
+def test_der_reader_basic():
+ import pytest
+ from cryptography.hazmat._der import DERReader
+
+ reader = DERReader(b"123456789")
+ assert reader.read_byte() == ord(b"1")
+ assert reader.read_bytes(1).tobytes() == b"2"
+ assert reader.read_bytes(4).tobytes() == b"3456"
+
+ with pytest.raises(ValueError):
+ reader.read_bytes(4)
+
+ assert reader.read_bytes(3).tobytes() == b"789"
+
+ # The input is now empty.
+ with pytest.raises(ValueError):
+ reader.read_bytes(1)
+ with pytest.raises(ValueError):
+ reader.read_byte()
+
+
+@run_in_pyodide(packages=["cryptography", "pytest"])
+def test_der():
+ import pytest
+ from cryptography.hazmat._der import (
+ INTEGER,
+ NULL,
+ OCTET_STRING,
+ SEQUENCE,
+ DERReader,
+ encode_der,
+ encode_der_integer,
+ )
+
+ # This input is the following structure, using
+ # https://github.com/google/der-ascii
+ #
+ # SEQUENCE {
+ # SEQUENCE {
+ # NULL {}
+ # INTEGER { 42 }
+ # OCTET_STRING { "hello" }
+ # }
+ # }
+ der = b"\x30\x0e\x30\x0c\x05\x00\x02\x01\x2a\x04\x05\x68\x65\x6c\x6c\x6f"
+ reader = DERReader(der)
+ with pytest.raises(ValueError):
+ reader.check_empty()
+
+ with pytest.raises(ValueError):
+ with reader:
+ pass
+
+ with pytest.raises(ZeroDivisionError):
+ with DERReader(der):
+ raise ZeroDivisionError
+
+ # Parse the outer element.
+ outer = reader.read_element(SEQUENCE)
+ reader.check_empty()
+ assert outer.data.tobytes() == der[2:]
+
+ # Parse the outer element with read_any_element.
+ reader = DERReader(der)
+ tag, outer2 = reader.read_any_element()
+ reader.check_empty()
+ assert tag == SEQUENCE
+ assert outer2.data.tobytes() == der[2:]
+
+ # Parse the outer element with read_single_element.
+ outer3 = DERReader(der).read_single_element(SEQUENCE)
+ assert outer3.data.tobytes() == der[2:]
+
+ # read_single_element rejects trailing data.
+ with pytest.raises(ValueError):
+ DERReader(der + der).read_single_element(SEQUENCE)
+
+ # Continue parsing the structure.
+ inner = outer.read_element(SEQUENCE)
+ outer.check_empty()
+
+ # Parsing a missing optional element should work.
+ assert inner.read_optional_element(INTEGER) is None
+
+ null = inner.read_element(NULL)
+ null.check_empty()
+
+ # Parsing a present optional element should work.
+ integer = inner.read_optional_element(INTEGER)
+ assert integer.as_integer() == 42
+
+ octet_string = inner.read_element(OCTET_STRING)
+ assert octet_string.data.tobytes() == b"hello"
+
+ # Parsing a missing optional element should work when the input is empty.
+ inner.check_empty()
+ assert inner.read_optional_element(INTEGER) is None
+
+ # Re-encode the same structure.
+ der2 = encode_der(
+ SEQUENCE,
+ encode_der(
+ SEQUENCE,
+ encode_der(NULL),
+ encode_der(INTEGER, encode_der_integer(42)),
+ encode_der(OCTET_STRING, b"hello"),
+ ),
+ )
+ assert der2 == der
+
+
+@run_in_pyodide(packages=["cryptography"])
+def test_der_lengths():
+
+ from cryptography.hazmat._der import OCTET_STRING, DERReader, encode_der
+
+ for [length, header] in [
+ # Single-byte lengths.
+ (0, b"\x04\x00"),
+ (1, b"\x04\x01"),
+ (2, b"\x04\x02"),
+ (127, b"\x04\x7f"),
+ # Long-form lengths.
+ (128, b"\x04\x81\x80"),
+ (129, b"\x04\x81\x81"),
+ (255, b"\x04\x81\xff"),
+ (0x100, b"\x04\x82\x01\x00"),
+ (0x101, b"\x04\x82\x01\x01"),
+ (0xFFFF, b"\x04\x82\xff\xff"),
+ (0x10000, b"\x04\x83\x01\x00\x00"),
+ ]:
+ body = length * b"a"
+ der = header + body
+
+ reader = DERReader(der)
+ element = reader.read_element(OCTET_STRING)
+ reader.check_empty()
+ assert element.data.tobytes() == body
+
+ assert encode_der(OCTET_STRING, body) == der
+
+
+@settings(suppress_health_check=[HealthCheck.too_slow], deadline=None)
+@given(data=binary())
+def test_fernet(selenium_module_scope, data):
+ sbytes = list(data)
+ with selenium_context_manager(selenium_module_scope) as selenium:
+ selenium.load_package("cryptography")
+ selenium.run(
+ f"""
+ from cryptography.fernet import Fernet
+ data = bytes({sbytes})
+ f = Fernet(Fernet.generate_key())
+ ct = f.encrypt(data)
+ assert f.decrypt(ct) == data
+ """
+ )
+
+
+@settings(suppress_health_check=[HealthCheck.too_slow], deadline=None)
+@given(block_size=integers(min_value=1, max_value=255), data=binary())
+def test_pkcs7(selenium_module_scope, block_size, data):
+ sbytes = list(data)
+ with selenium_context_manager(selenium_module_scope) as selenium:
+ selenium.load_package("cryptography")
+ selenium.run(
+ f"""
+ from cryptography.hazmat.primitives.padding import ANSIX923, PKCS7
+ block_size = {block_size}
+ data = bytes({sbytes})
+ # Generate in [1, 31] so we can easily get block_size in bits by
+ # multiplying by 8.
+ p = PKCS7(block_size=block_size * 8)
+ padder = p.padder()
+ unpadder = p.unpadder()
+
+ padded = padder.update(data) + padder.finalize()
+
+ assert unpadder.update(padded) + unpadder.finalize() == data
+ """
+ )
+
+
+@settings(suppress_health_check=[HealthCheck.too_slow], deadline=None)
+@given(block_size=integers(min_value=1, max_value=255), data=binary())
+def test_ansix923(selenium_module_scope, block_size, data):
+ sbytes = list(data)
+ with selenium_context_manager(selenium_module_scope) as selenium:
+ selenium.load_package("cryptography")
+ selenium.run(
+ f"""
+ from cryptography.hazmat.primitives.padding import ANSIX923, PKCS7
+ block_size = {block_size}
+ data = bytes({sbytes})
+ a = ANSIX923(block_size=block_size * 8)
+ padder = a.padder()
+ unpadder = a.unpadder()
+
+ padded = padder.update(data) + padder.finalize()
+
+ assert unpadder.update(padded) + unpadder.finalize() == data
+ """
+ )
diff --git a/packages/ssl/test_ssl.py b/packages/ssl/test_ssl.py
new file mode 100644
--- /dev/null
+++ b/packages/ssl/test_ssl.py
@@ -0,0 +1,32 @@
+from pyodide_build.testing import run_in_pyodide
+
+
+@run_in_pyodide(packages=["test", "ssl"])
+def test_ssl():
+ import platform
+ import unittest
+ import unittest.mock
+ from test import libregrtest
+
+ platform.platform(aliased=True)
+ name = "test_ssl"
+ ignore_tests = [
+ "*test_context_custom_class*",
+ "*ThreadedTests*",
+ "*ocket*",
+ "test_verify_flags",
+ "test_subclass",
+ "test_lib_reason",
+ ]
+
+ try:
+ with unittest.mock.patch(
+ "test.support.socket_helper.bind_port",
+ side_effect=unittest.SkipTest("nope!"),
+ ):
+ libregrtest.main(
+ [name], ignore_tests=ignore_tests, verbose=True, verbose3=True
+ )
+ except SystemExit as e:
+ if e.code != 0:
+ raise RuntimeError(f"Failed with code: {e.code}")
diff --git a/pyodide-build/pyodide_build/tests/test_common.py b/pyodide-build/pyodide_build/tests/test_common.py
--- a/pyodide-build/pyodide_build/tests/test_common.py
+++ b/pyodide-build/pyodide_build/tests/test_common.py
@@ -1,4 +1,8 @@
from pyodide_build.common import (
+ ALWAYS_PACKAGES,
+ CORE_PACKAGES,
+ CORE_SCIPY_PACKAGES,
+ UNVENDORED_STDLIB_MODULES,
_parse_package_subset,
get_make_environment_vars,
get_make_flag,
@@ -6,77 +10,54 @@
def test_parse_package_subset():
- # micropip is always included
- assert _parse_package_subset("numpy,pandas") == {
- "pyparsing",
- "packaging",
- "micropip",
- "numpy",
- "pandas",
- "test",
- "distutils",
- }
+ assert (
+ _parse_package_subset("numpy,pandas")
+ == {
+ "numpy",
+ "pandas",
+ }
+ | UNVENDORED_STDLIB_MODULES
+ | ALWAYS_PACKAGES
+ )
# duplicates are removed
- assert _parse_package_subset("numpy,numpy") == {
- "pyparsing",
- "packaging",
- "micropip",
- "numpy",
- "test",
- "distutils",
- }
+ assert (
+ _parse_package_subset("numpy,numpy")
+ == {
+ "numpy",
+ }
+ | UNVENDORED_STDLIB_MODULES
+ | ALWAYS_PACKAGES
+ )
# no empty package name included, spaces are handled
- assert _parse_package_subset("x, a, b, c ,,, d,,") == {
- "pyparsing",
- "packaging",
- "micropip",
- "test",
- "distutils",
- "x",
- "a",
- "b",
- "c",
- "d",
- }
+ assert (
+ _parse_package_subset("x, a, b, c ,,, d,,")
+ == {
+ "x",
+ "a",
+ "b",
+ "c",
+ "d",
+ }
+ | UNVENDORED_STDLIB_MODULES
+ | ALWAYS_PACKAGES
+ )
- assert _parse_package_subset("core") == {
- "pyparsing",
- "packaging",
- "pytz",
- "Jinja2",
- "micropip",
- "regex",
- "fpcast-test",
- "test",
- "distutils",
- "sharedlib-test-py",
- "cpp-exceptions-test",
- }
+ assert (
+ _parse_package_subset("core")
+ == CORE_PACKAGES | UNVENDORED_STDLIB_MODULES | ALWAYS_PACKAGES
+ )
# by default core packages are built
assert _parse_package_subset(None) == _parse_package_subset("core")
- assert _parse_package_subset("min-scipy-stack") == {
- "pyparsing",
- "packaging",
- "pytz",
- "Jinja2",
- "micropip",
- "regex",
- "fpcast-test",
- "numpy",
- "scipy",
- "pandas",
- "matplotlib",
- "scikit-learn",
- "joblib",
- "pytest",
- "test",
- "distutils",
- "sharedlib-test-py",
- "cpp-exceptions-test",
- }
+ assert (
+ _parse_package_subset("min-scipy-stack")
+ == CORE_SCIPY_PACKAGES
+ | CORE_PACKAGES
+ | UNVENDORED_STDLIB_MODULES
+ | ALWAYS_PACKAGES
+ )
# reserved key words can be combined with other packages
assert _parse_package_subset("core, unknown") == _parse_package_subset("core") | {
"unknown"
diff --git a/src/tests/test_core_python.py b/src/tests/test_core_python.py
--- a/src/tests/test_core_python.py
+++ b/src/tests/test_core_python.py
@@ -30,7 +30,7 @@ def test_cpython_core(python_test, selenium, request):
else:
pytest.xfail('known failure with code "{}"'.format(",".join(error_flags)))
- selenium.load_package(UNVENDORED_STDLIB_MODULES)
+ selenium.load_package(list(UNVENDORED_STDLIB_MODULES))
try:
selenium.run(
"""
| Cryptography Support
Pyodide is genius. It will certainly influence and broaden the world impact of all these data science tools and Python in so many new contexts.
Any plan on adding 'cryptography' to the list of supported packages?
It seems many cool modules depend on it for fundamental encryption and security; these are all applications that would be awesome in the browser with Pyodide.
e.g.
```console
>>> micropip.install('ccxt')
Couldn't find a pure Python 3 wheel for 'cryptography'
>>> micropip.install('alias')
Couldn't find a pure Python 3 wheel for 'cryptography'
>>> micropip.install('keycache')
Couldn't find a pure Python 3 wheel for 'cryptography'
```
| The issue with adding the cryptography package is that it uses CFFI for C extensions, and we currently don't support CFFI (https://github.com/iodide-project/pyodide/issues/681#issuecomment-639413545).
Is this a limitation of pyodide or would we be able to add this in with a PR?
The same applies to the [`pynacl`](https://github.com/pyca/pynacl/) package. It took some time to figure out that it uses CFFI (for libsodium), and pyodide is not able to load dynamic libraries for that reason. It would be amazing to be able to use such packages.
Same for google/tink btw:
```
ValueError: Couldn't find a pure Python 3 wheel for 'tink'
```
Note that we do support cffi since #1656 was merged.
I meant the https://pypi.org/project/cffi package, not the C FFI in general. I don't know what the practical difference would be in making it work as compared to ctypes, but we should try adding it as a first step then.
I see. Well libffi support was a blocker. Looking at the cffi github there's a lot of stuff that I don't understand at all. It might well be quite a complex task still.
Okay well cffi only asks for the libffi library, so there is a chance it might just work if we try to build it.
I am trying to build it, and so far I am running into the problem that it is trying to link with `-lffi`, but `libffi` can only be linked into the main module. We somehow need to modify the config to filter out that flag, or convince it not to issue it. I guess for now I can just modify pywasmcross. With that stripped out it at least builds correctly. It imports successfully too.
We're presumably going to start having trouble when cffi tries to compile things, since we don't have a C compiler.
Okay well even with the cffi blocker cleared, there is a bigger problem which is that `cryptography` has a rust/pyo3 module. Building rust modules seems to be a pretty daunting task.
It's by no means a long term solution but we could try building cryptography 3.3.2 (from Feb 7, 2021) which was before they added the rust/pyo3 dependency.
That is probably a good idea. Another concern about rust is that it generates very large build artifacts: a Rust Python module that defines a single API that returns a constant compiles to a 3.4Mb .so, though obviously the size is something we can concern ourselves with once we figure out how to actually build stuff...
Perhaps I'll open a draft PR with some of my partial progress toward building rust modules. I might be getting close, though a lot of problems come up.
The next issue with cryptography is lots of `six`.
> some of my partial progress toward building rust modules.
Even if it works, and personally I like playing with Rust, I would be a bit hesitant to add it to the default build. People already have enough trouble building Pyodide as is, without adding the Rust dependency. Or at least we would need to revisit the idea that all included packages are built by default.
> a Rust Python module that defines a single API that returns a constant compiles to a 3.4Mb .so
Hmm, interesting. I was under the mistaken impression that Rust would make smaller wasm binaries, given all the work being done in that area. But even https://rustpython.github.io/demo/ is still 10MB compressed, which is larger than us.
> at least we would need to revisit the idea that all included packages are built by default.
Yeah, might be helpful to divide them up by capabilities required.
> I was under the mistaken impression that Rust would make smaller wasm binaries
I think they compress it very well; it just comes with TONS of extra stuff in it by default. For instance, all of the fancy panic messages. Their `print!` macro generates a whole pile of support code. Generics rapidly increase code size. In C there just aren't any constructs that generate enormous amounts of code behind the scenes. I think if you stripped out all of this stuff, it might make a smaller binary, but it is not clear how to do it. It also might not be fair because I made no effort at all to shrink this.
In any case, Rust seems much harder to port than C.
Okay, so the next issue: we probably need to port OpenSSL for this.
There should be a "pure" Python implementation. Did you look for such an alternative?
Why? It's even possible to implement PGP in Javascript.
See https://openpgpjs.org/
How about this route?
https://github.com/ricmoo/pyaes
@planctron Yeah that would be a reasonable approach. One would have to implement [custom backend interfaces](https://cryptography.io/en/latest/hazmat/backends/interfaces/) with pgpjs. I most likely won't do this myself but if someone is interested in contributing that would be welcome.
> @planctron Yeah that would be a reasonable approach. ..
My idea was to use **another library**, which is pure python, like _pyaes_.
Can you not use _**pyaes**_ to encrypt instead of _cryptography_?
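For illustration, a minimal `pyaes` sketch (the key and message are made-up example values; since pyaes is pure Python, it should be installable with micropip as-is):

```python
import pyaes

key = b"0123456789abcdef"  # 16-byte AES-128 key, example value only
plaintext = b"attack at dawn"

# CTR is a stream mode, so no padding is needed; a fresh instance
# (same key and counter) is used for encryption and for decryption
ciphertext = pyaes.AESModeOfOperationCTR(key).encrypt(plaintext)
assert pyaes.AESModeOfOperationCTR(key).decrypt(ciphertext) == plaintext
```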
@planctron In the original post, the rationale given for wanting the cryptography package is that a large number of other packages are dependent on it (ccxt, alias, and keycache are given as examples).
Would it be possible to polyfill this module using SubtleCrypto? (https://developer.mozilla.org/en-US/docs/Web/API/SubtleCrypto)
Perhaps this has already been considered.
> Would it be possible to polyfill this module using SubtleCrypto?
Thanks for the proposal! For one thing, all those methods are async, which would currently be difficult to call synchronously from Python. Also, doing experimental things could work for some less critical packages, but doing that for cryptography is probably not a great idea. That being said, people can indeed use SubtleCrypto from Python (or make a package on top of it), as long as it's not presented as an identical package to the cryptography library, IMO.
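As a rough sketch of what calling it directly can look like (a made-up example, not a `cryptography` replacement; it assumes the browser main thread and top-level `await`, and the exact buffer-conversion helpers may vary between Pyodide versions):

```python
from js import crypto, Uint8Array

message = b"hello world"
buf = Uint8Array.new(len(message))
buf.assign(message)  # copy the Python bytes into the JS typed array

# SubtleCrypto methods return Promises, so they must be awaited
digest = await crypto.subtle.digest("SHA-256", buf)
print(bytes(digest.to_py()).hex())
```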
Ok, good points. Brainstorming: would it be possible to wrap those async methods in promises and then use 'await' inside the wrapper methods (to make them sync)? Or compile the original cryptography library within the WebAssembly dependency, but map the missing strong entropy to use SubtleCrypto instead of whatever is causing the build to not work in WebAssembly?
I think the best way forward is still to build the original cryptography library version 3.4.8 (2021-08-24) before rust was added as a build requirement. To do that one would need to build openssl as a dynamic library https://github.com/pyodide/pyodide/issues/2124. As far as I can tell that's the only remaining blocker.
I think I can get cryptography to work. They actually introduced the rust dependency in v3.4.0 so we will need to use v3.3.2 until we can resolve https://github.com/rust-lang/rust/issues/93015 or hack our way around it.
Well I successfully built and imported cryptography v3.3.2, but there are some dynamic linking errors that need to be debugged. | 2022-03-09T21:43:43 |
pyodide/pyodide | 2,271 | pyodide__pyodide-2271 | [
"2226"
] | 18e664afb16de754ec85ef2c46fffcec1e9f0450 | diff --git a/packages/matplotlib/src/html5_canvas_backend.py b/packages/matplotlib/src/html5_canvas_backend.py
--- a/packages/matplotlib/src/html5_canvas_backend.py
+++ b/packages/matplotlib/src/html5_canvas_backend.py
@@ -21,7 +21,12 @@
from PIL import Image
from PIL.PngImagePlugin import PngInfo
-from js import FontFace, ImageData, XMLHttpRequest, document, window
+try:
+ from js import FontFace, ImageData, document
+except ImportError:
+ raise ImportError(
+ "html5_canvas_backend is only supported in the browser in the main thread"
+ )
from pyodide import create_proxy
_capstyle_d = {"projecting": "square", "butt": "butt", "round": "round"}
@@ -38,20 +43,11 @@ class FigureCanvasHTMLCanvas(FigureCanvasWasm):
def __init__(self, *args, **kwargs):
FigureCanvasWasm.__init__(self, *args, **kwargs)
- # A count of the fonts loaded. To support testing
- window.font_counter = 0
-
def create_root_element(self):
root_element = document.createElement("div")
document.body.appendChild(root_element)
return root_element
- def get_dpi_ratio(self, context):
- if hasattr(window, "testing"):
- return 2.0
- else:
- return super().get_dpi_ratio(context)
-
def draw(self):
# Render the figure using custom renderer
self._idle_scheduled = True
@@ -66,6 +62,8 @@ def draw(self):
ctx = canvas.getContext("2d")
renderer = RendererHTMLCanvas(ctx, width, height, self.figure.dpi, self)
self.figure.draw(renderer)
+ except Exception as e:
+ raise RuntimeError("Rendering failed") from e
finally:
self.figure.dpi = orig_dpi
self._idle_scheduled = False
@@ -84,28 +82,6 @@ def get_pixel_data(self):
canvas_base64 = base64.b64decode(img_URL)
return np.asarray(Image.open(io.BytesIO(canvas_base64)))
- def compare_reference_image(self, url, threshold):
- canvas_data = self.get_pixel_data()
-
- def _get_url_async(url, threshold):
- req = XMLHttpRequest.new()
- req.open("GET", url, True)
- req.responseType = "arraybuffer"
-
- def callback(e):
- if req.readyState == 4:
- ref_data = np.asarray(Image.open(io.BytesIO(req.response.to_py())))
- mean_deviation = np.mean(np.abs(canvas_data - ref_data))
- window.deviation = mean_deviation
-
- # converts a `numpy._bool` type explicitly to `bool`
- window.result = bool(mean_deviation <= threshold)
-
- req.onreadystatechange = callback
- req.send(None)
-
- _get_url_async(url, threshold)
-
def print_png(
self, filename_or_obj, *args, metadata=None, pil_kwargs=None, **kwargs
):
@@ -236,6 +212,9 @@ def __init__(self, ctx, width, height, dpi, fig):
self.fontd = maxdict(50)
self.mathtext_parser = MathTextParser("bitmap")
+ # Keep the state of fontfaces that are loading
+ self.fonts_loading = {}
+
def new_gc(self):
return GraphicsContextHTMLCanvas(renderer=self)
@@ -381,15 +360,20 @@ def _draw_math_text(self, gc, x, y, s, prop, angle):
if angle != 0:
self.ctx.restore()
- def draw_text(self, gc, x, y, s, prop, angle, ismath=False, mtext=None):
- def _load_font_into_web(loaded_face):
- document.fonts.add(loaded_face.result())
- window.font_counter += 1
- self.fig.draw()
+ def load_font_into_web(self, loaded_face, font_url):
+ fontface = loaded_face.result()
+ document.fonts.add(fontface)
+ self.fonts_loading.pop(font_url, None)
+
+ # Redraw figure after font has loaded
+ self.fig.draw()
+ return fontface
+ def draw_text(self, gc, x, y, s, prop, angle, ismath=False, mtext=None):
if ismath:
self._draw_math_text(gc, x, y, s, prop, angle)
return
+
angle = math.radians(angle)
width, height, descent = self.get_text_width_height_descent(s, prop, ismath)
x -= math.sin(angle) * descent
@@ -412,7 +396,12 @@ def _load_font_into_web(loaded_face):
if font_face_arguments not in _font_set:
_font_set.add(font_face_arguments)
f = FontFace.new(*font_face_arguments)
- f.load().add_done_callback(_load_font_into_web)
+
+ font_url = font_face_arguments[1]
+ self.fonts_loading[font_url] = f
+ f.load().add_done_callback(
+ lambda result: self.load_font_into_web(result, font_url)
+ )
font_property_string = "{} {} {:.3g}px {}, {}".format(
prop.get_style(),
| diff --git a/packages/matplotlib/test_matplotlib.py b/packages/matplotlib/test_matplotlib.py
--- a/packages/matplotlib/test_matplotlib.py
+++ b/packages/matplotlib/test_matplotlib.py
@@ -1,57 +1,83 @@
-import os
+import base64
import pathlib
+import textwrap
import pytest
-from selenium.webdriver.support.wait import WebDriverWait
-ROOT_PATH = pathlib.Path(__file__).resolve().parents[2]
+REFERENCE_IMAGES_PATH = pathlib.Path(__file__).parent / "reference-images"
-TEST_PATH = ROOT_PATH / "packages" / "matplotlib" / "reference-images"
-TARGET_PATH = ROOT_PATH / "build" / "matplotlib-test"
-
-def get_backend(selenium_standalone):
- selenium = selenium_standalone
- return selenium.run(
- """
- import matplotlib
- matplotlib.get_backend()
+def run_with_resolve(selenium, code):
+ selenium.run_js(
+ f"""
+ try {{
+ let promise = new Promise((resolve) => self.resolve = resolve);
+ pyodide.runPython({code!r});
+ await promise;
+ }} finally {{
+ delete self.resolve;
+ }}
"""
)
-def get_canvas_data(selenium, prefix):
- import base64
+def patch_font_loading_and_dpi(target_font=""):
+ """Monkey-patches font loading and dpi to allow testing"""
+ return textwrap.dedent(
+ f"""from matplotlib.backends.html5_canvas_backend import RendererHTMLCanvas
+ from matplotlib.backends.html5_canvas_backend import FigureCanvasHTMLCanvas
+ FigureCanvasHTMLCanvas.get_dpi_ratio = lambda self, context: 2.0
+ load_font_into_web = RendererHTMLCanvas.load_font_into_web
+ def load_font_into_web_wrapper(self, loaded_font, font_url, orig_function=load_font_into_web):
+ fontface = orig_function(self, loaded_font, font_url)
+
+ target_font = {target_font!r}
+ if not target_font or target_font == fontface.family:
+ try:
+ from js import resolve
+ resolve()
+ except Exception as e:
+ raise ValueError("unable to resolve") from e
+
+ RendererHTMLCanvas.load_font_into_web = load_font_into_web_wrapper
+ """
+ )
- canvas_tag_property = "//canvas[starts-with(@id, 'matplotlib')]"
- canvas_element = selenium.driver.find_element_by_xpath(canvas_tag_property)
- img_script = "return arguments[0].toDataURL('image/png').substring(21)"
- canvas_base64 = selenium.driver.execute_script(img_script, canvas_element)
- canvas_png = base64.b64decode(canvas_base64)
- with open(rf"{TEST_PATH}/{prefix}-{selenium.browser}.png", "wb") as f:
- f.write(canvas_png)
+def save_canvas_data(selenium, output_path):
+ canvas_data = selenium.run(
+ """
+ import base64
+ canvas = plt.gcf().canvas.get_element("canvas")
+ canvas_data = canvas.toDataURL("image/png")[21:]
+ canvas_data
+ """
+ )
-def check_comparison(selenium, prefix, num_fonts):
- font_wait = WebDriverWait(selenium.driver, timeout=350)
- font_wait.until(FontsLoaded(num_fonts))
+ canvas_png = base64.b64decode(canvas_data)
+ output_path.write_bytes(canvas_png)
- # If we don't have a reference image, write one to disk
- if not os.path.isfile(f"{TEST_PATH}/{prefix}-{selenium.browser}.png"):
- get_canvas_data(selenium, prefix)
- selenium.run(
+def compare_with_reference_image(selenium, reference_image):
+ reference_image_encoded = base64.b64encode(reference_image.read_bytes())
+ deviation = selenium.run(
f"""
- url = 'http://{selenium.server_hostname}:{selenium.server_port}/matplotlib-test/{prefix}-{selenium.browser}.png'
- threshold = 0
- plt.gcf().canvas.compare_reference_image(url, threshold)
- """
+ import io
+ import base64
+ import numpy as np
+ from PIL import Image
+ canvas_data = plt.gcf().canvas.get_pixel_data()
+ ref_data = np.asarray(Image.open(io.BytesIO(base64.b64decode({reference_image_encoded!r}))))
+
+ deviation = np.mean(np.abs(canvas_data - ref_data))
+ float(deviation)
+ """
)
- wait = WebDriverWait(selenium.driver, timeout=350)
- wait.until(ResultLoaded())
- assert selenium.run("window.font_counter") == num_fonts
- assert selenium.run("window.deviation") == 0
- assert selenium.run("window.result") is True
+
+ # Note: uncomment this line if you want to save the output canvas image (for comparison).
+ # save_canvas_data(selenium, reference_image.with_name(f"output-{reference_image.name}"))
+
+ return deviation == 0.0
@pytest.mark.skip_refcount_check
@@ -76,13 +102,17 @@ def test_svg(selenium):
if selenium.browser == "node":
pytest.xfail("No supported matplotlib backends on node")
selenium.load_package("matplotlib")
- selenium.run("from matplotlib import pyplot as plt")
- selenium.run("plt.figure(); pass")
- selenium.run("x = plt.plot([1,2,3])")
- selenium.run("import io")
- selenium.run("fd = io.BytesIO()")
- selenium.run("plt.savefig(fd, format='svg')")
- content = selenium.run("fd.getvalue().decode('utf8')")
+ content = selenium.run(
+ """
+ from matplotlib import pyplot as plt
+ import io
+ plt.figure()
+ x = plt.plot([1,2,3])
+ fd = io.BytesIO()
+ plt.savefig(fd, format='svg')
+ fd.getvalue().decode('utf8')
+ """
+ )
assert len(content) == 14998
assert content.startswith("<?xml")
@@ -92,12 +122,16 @@ def test_pdf(selenium):
if selenium.browser == "node":
pytest.xfail("No supported matplotlib backends on node")
selenium.load_package("matplotlib")
- selenium.run("from matplotlib import pyplot as plt")
- selenium.run("plt.figure(); pass")
- selenium.run("x = plt.plot([1,2,3])")
- selenium.run("import io")
- selenium.run("fd = io.BytesIO()")
- selenium.run("plt.savefig(fd, format='pdf')")
+ selenium.run(
+ """
+ from matplotlib import pyplot as plt
+ plt.figure()
+ x = plt.plot([1,2,3])
+ import io
+ fd = io.BytesIO()
+ plt.savefig(fd, format='pdf')
+ """
+ )
def test_font_manager(selenium):
@@ -138,22 +172,17 @@ def test_rendering(selenium_standalone):
selenium = selenium_standalone
if selenium.browser == "node":
pytest.xfail("No supported matplotlib backends on node")
+
selenium.load_package("matplotlib")
- if get_backend(selenium) == "module://matplotlib.backends.wasm_backend":
- print(
- "test supported only for html5 canvas backend. wasm backend is currently used. switching to html5 canvas backend"
- )
- TARGET_PATH.symlink_to(TEST_PATH, True)
- try:
- selenium.get_driver().set_script_timeout(7000)
- selenium.run(
- """
+ selenium.set_script_timeout(60)
+ run_with_resolve(
+ selenium,
+ f"""
+ {patch_font_loading_and_dpi()}
import matplotlib
matplotlib.use("module://matplotlib.backends.html5_canvas_backend")
- from js import window
- window.testing = True
- from matplotlib import pyplot as plt
import numpy as np
+ from matplotlib import pyplot as plt
t = np.arange(0.0, 2.0, 0.01)
s = 1 + np.sin(2 * np.pi * t)
plt.figure()
@@ -161,12 +190,12 @@ def test_rendering(selenium_standalone):
plt.plot(t, t)
plt.grid(True)
plt.show()
- """
- )
+ """,
+ )
- check_comparison(selenium, "canvas", 1)
- finally:
- TARGET_PATH.unlink()
+ assert compare_with_reference_image(
+ selenium, REFERENCE_IMAGES_PATH / f"canvas-{selenium.browser}.png"
+ )
@pytest.mark.skip_refcount_check
@@ -175,20 +204,15 @@ def test_draw_image(selenium_standalone):
selenium = selenium_standalone
if selenium.browser == "node":
pytest.xfail("No supported matplotlib backends on node")
+
selenium.load_package("matplotlib")
- if get_backend(selenium) == "module://matplotlib.backends.wasm_backend":
- print(
- "test supported only for html5 canvas backend. wasm backend is currently used. switching to html5 canvas backend"
- )
- TARGET_PATH.symlink_to(TEST_PATH, True)
- try:
- selenium.get_driver().set_script_timeout(7000)
- selenium.run(
- """
+ selenium.set_script_timeout(60)
+ run_with_resolve(
+ selenium,
+ f"""
+ {patch_font_loading_and_dpi()}
import matplotlib
matplotlib.use("module://matplotlib.backends.html5_canvas_backend")
- from js import window
- window.testing = True
import numpy as np
import matplotlib.cm as cm
import matplotlib.pyplot as plt
@@ -206,12 +230,12 @@ def test_draw_image(selenium_standalone):
origin='lower', extent=[-3, 3, -3, 3],
vmax=abs(Z).max(), vmin=-abs(Z).max())
plt.show()
- """
- )
+ """,
+ )
- check_comparison(selenium, "canvas-image", 1)
- finally:
- TARGET_PATH.unlink()
+ assert compare_with_reference_image(
+ selenium, REFERENCE_IMAGES_PATH / f"canvas-image-{selenium.browser}.png"
+ )
@pytest.mark.skip_refcount_check
@@ -220,21 +244,15 @@ def test_draw_image_affine_transform(selenium_standalone):
selenium = selenium_standalone
if selenium.browser == "node":
pytest.xfail("No supported matplotlib backends on node")
+
selenium.load_package("matplotlib")
- if get_backend(selenium) == "module://matplotlib.backends.wasm_backend":
- print(
- "test supported only for html5 canvas backend. wasm backend is currently used. switching to html5 canvas backend"
- )
- TARGET_PATH.symlink_to(TEST_PATH, True)
- try:
- selenium.get_driver().set_script_timeout(7000)
- selenium.run(
- """
+ selenium.set_script_timeout(60)
+ run_with_resolve(
+ selenium,
+ f"""
+ {patch_font_loading_and_dpi()}
import matplotlib
matplotlib.use("module://matplotlib.backends.html5_canvas_backend")
- from js import window
- window.testing = True
-
import numpy as np
import matplotlib.pyplot as plt
import matplotlib.transforms as mtransforms
@@ -281,12 +299,12 @@ def do_plot(ax, Z, transform):
rotate_deg(30).skew_deg(30, 15).scale(-1, .5).translate(.5, -1))
plt.show()
- """
- )
+ """,
+ )
- check_comparison(selenium, "canvas-image-affine", 1)
- finally:
- TARGET_PATH.unlink()
+ assert compare_with_reference_image(
+ selenium, REFERENCE_IMAGES_PATH / f"canvas-image-affine-{selenium.browser}.png"
+ )
@pytest.mark.skip_refcount_check
@@ -295,22 +313,18 @@ def test_draw_text_rotated(selenium_standalone):
selenium = selenium_standalone
if selenium.browser == "node":
pytest.xfail("No supported matplotlib backends on node")
- if selenium.browser == "chrome":
- pytest.xfail(f"high recursion limit not supported for {selenium.browser}")
+
selenium.load_package("matplotlib")
- if get_backend(selenium) == "module://matplotlib.backends.wasm_backend":
- print(
- "test supported only for html5 canvas backend. wasm backend is currently used. switching to html5 canvas backend"
- )
- TARGET_PATH.symlink_to(TEST_PATH, True)
- try:
- selenium.get_driver().set_script_timeout(7000)
- selenium.run(
- """
+ selenium.set_script_timeout(60)
+ run_with_resolve(
+ selenium,
+ f"""
+ {patch_font_loading_and_dpi()}
+ import os
+ os.environ["TESTING_MATPLOTLIB"] = "1"
+
import matplotlib
matplotlib.use("module://matplotlib.backends.html5_canvas_backend")
- from js import window
- window.testing = True
import matplotlib.pyplot as plt
from matplotlib.dates import (
YEARLY, DateFormatter,
@@ -340,41 +354,36 @@ def test_draw_text_rotated(selenium_standalone):
plt.setp(labels, rotation=30, fontsize=10)
plt.show()
- """
- )
+ """,
+ )
- check_comparison(selenium, "canvas-text-rotated", 1)
- finally:
- TARGET_PATH.unlink()
+ assert compare_with_reference_image(
+ selenium, REFERENCE_IMAGES_PATH / f"canvas-text-rotated-{selenium.browser}.png"
+ )
@pytest.mark.skip_refcount_check
@pytest.mark.skip_pyproxy_check
def test_draw_math_text(selenium_standalone):
selenium = selenium_standalone
- xfail_msg = None
if selenium.browser == "node":
- xfail_msg = "No supported matplotlib backends on node"
- elif selenium.browser == "chrome":
- xfail_msg = f"high recursion limit not supported for {selenium.browser}"
- elif selenium.browser == "firefox":
- xfail_msg = "Fails sometimes with selenium.common.exceptions.TimeoutException"
- if xfail_msg:
- pytest.xfail(xfail_msg)
+ pytest.xfail("No supported matplotlib backends on node")
+
selenium.load_package("matplotlib")
- if get_backend(selenium) == "module://matplotlib.backends.wasm_backend":
- print(
- "test supported only for html5 canvas backend. wasm backend is currently used. switching to html5 canvas backend"
- )
- TARGET_PATH.symlink_to(TEST_PATH, True)
- try:
- selenium.get_driver().set_script_timeout(7000)
- selenium.run(
- r"""
+ selenium.set_script_timeout(60)
+ run_with_resolve(
+ selenium,
+ f"""
+ {patch_font_loading_and_dpi()}
+ """
+ + r"""
+ import os
+ os.environ["TESTING_MATPLOTLIB"] = "1"
+
import matplotlib
matplotlib.use("module://matplotlib.backends.html5_canvas_backend")
from js import window
- window.testing = True
+ window.testingMatplotlib = True
import matplotlib.pyplot as plt
import sys
import re
@@ -472,12 +481,12 @@ def doall():
print(i, s)
plt.show()
doall()
- """
- )
+ """,
+ )
- check_comparison(selenium, "canvas-math-text", 1)
- finally:
- TARGET_PATH.unlink()
+ assert compare_with_reference_image(
+ selenium, REFERENCE_IMAGES_PATH / f"canvas-math-text-{selenium.browser}.png"
+ )
@pytest.mark.skip_refcount_check
@@ -486,39 +495,35 @@ def test_custom_font_text(selenium_standalone):
selenium = selenium_standalone
if selenium.browser == "node":
pytest.xfail("No supported matplotlib backends on node")
+
selenium.load_package("matplotlib")
- if get_backend(selenium) == "module://matplotlib.backends.wasm_backend":
- print(
- "test supported only for html5 canvas backend. wasm backend is currently used. switching to html5 canvas backend"
- )
- TARGET_PATH.symlink_to(TEST_PATH, True)
- try:
- selenium.get_driver().set_script_timeout(7000)
- selenium.run(
- """
- import matplotlib
- matplotlib.use("module://matplotlib.backends.html5_canvas_backend")
- from js import window
- window.testing = True
- import matplotlib.pyplot as plt
- import numpy as np
-
- f = {'fontname': 'cmsy10'}
-
- t = np.arange(0.0, 2.0, 0.01)
- s = 1 + np.sin(2 * np.pi * t)
- plt.figure()
- plt.title('A simple Sine Curve', **f)
- plt.plot(t, s, linewidth=1.0, marker=11)
- plt.plot(t, t)
- plt.grid(True)
- plt.show()
- """
- )
+ selenium.set_script_timeout(60)
+ run_with_resolve(
+ selenium,
+ f"""
+ {patch_font_loading_and_dpi(target_font='cmsy10')}
+ import matplotlib
+ matplotlib.use("module://matplotlib.backends.html5_canvas_backend")
+ import matplotlib.pyplot as plt
+ import numpy as np
+
+ f = {{'fontname': 'cmsy10'}}
+
+ t = np.arange(0.0, 2.0, 0.01)
+ s = 1 + np.sin(2 * np.pi * t)
+ plt.figure()
+ plt.title('A simple Sine Curve', **f)
+ plt.plot(t, s, linewidth=1.0, marker=11)
+ plt.plot(t, t)
+ plt.grid(True)
+ plt.show()
+ """,
+ )
- check_comparison(selenium, "canvas-custom-font-text", 2)
- finally:
- TARGET_PATH.unlink()
+ assert compare_with_reference_image(
+ selenium,
+ REFERENCE_IMAGES_PATH / f"canvas-custom-font-text-{selenium.browser}.png",
+ )
@pytest.mark.skip_refcount_check
@@ -527,47 +532,41 @@ def test_zoom_on_polar_plot(selenium_standalone):
selenium = selenium_standalone
if selenium.browser == "node":
pytest.xfail("No supported matplotlib backends on node")
+
selenium.load_package("matplotlib")
- if get_backend(selenium) == "module://matplotlib.backends.wasm_backend":
- print(
- "test supported only for html5 canvas backend. wasm backend is currently used. switching to html5 canvas backend"
- )
- TARGET_PATH.symlink_to(TEST_PATH, True)
- try:
- selenium.get_driver().set_script_timeout(7000)
- selenium.run(
- """
- import matplotlib
- matplotlib.use("module://matplotlib.backends.html5_canvas_backend")
- from js import window
- window.testing = True
-
- import numpy as np
- import matplotlib.pyplot as plt
- np.random.seed(42)
-
- # Compute pie slices
- N = 20
- theta = np.linspace(0.0, 2 * np.pi, N, endpoint=False)
- radii = 10 * np.random.rand(N)
- width = np.pi / 4 * np.random.rand(N)
-
- ax = plt.subplot(111, projection='polar')
- bars = ax.bar(theta, radii, width=width, bottom=0.0)
-
- # Use custom colors and opacity
- for r, bar in zip(radii, bars):
- bar.set_facecolor(plt.cm.viridis(r / 10.))
- bar.set_alpha(0.5)
-
- ax.set_rlim([0,5])
- plt.show()
- """
- )
+ selenium.set_script_timeout(60)
+ run_with_resolve(
+ selenium,
+ f"""
+ {patch_font_loading_and_dpi()}
+ import matplotlib
+ matplotlib.use("module://matplotlib.backends.html5_canvas_backend")
+ import numpy as np
+ import matplotlib.pyplot as plt
+ np.random.seed(42)
+
+ # Compute pie slices
+ N = 20
+ theta = np.linspace(0.0, 2 * np.pi, N, endpoint=False)
+ radii = 10 * np.random.rand(N)
+ width = np.pi / 4 * np.random.rand(N)
- check_comparison(selenium, "canvas-polar-zoom", 1)
- finally:
- TARGET_PATH.unlink()
+ ax = plt.subplot(111, projection='polar')
+ bars = ax.bar(theta, radii, width=width, bottom=0.0)
+
+ # Use custom colors and opacity
+ for r, bar in zip(radii, bars):
+ bar.set_facecolor(plt.cm.viridis(r / 10.))
+ bar.set_alpha(0.5)
+
+ ax.set_rlim([0,5])
+ plt.show()
+ """,
+ )
+
+ assert compare_with_reference_image(
+ selenium, REFERENCE_IMAGES_PATH / f"canvas-polar-zoom-{selenium.browser}.png"
+ )
@pytest.mark.skip_refcount_check
@@ -576,21 +575,15 @@ def test_transparency(selenium_standalone):
selenium = selenium_standalone
if selenium.browser == "node":
pytest.xfail("No supported matplotlib backends on node")
+
selenium.load_package("matplotlib")
- if get_backend(selenium) == "module://matplotlib.backends.wasm_backend":
- print(
- "test supported only for html5 canvas backend. wasm backend is currently used. switching to html5 canvas backend"
- )
- TARGET_PATH.symlink_to(TEST_PATH, True)
- try:
- selenium.get_driver().set_script_timeout(7000)
- selenium.run(
- """
+ selenium.set_script_timeout(60)
+ run_with_resolve(
+ selenium,
+ f"""
+ {patch_font_loading_and_dpi()}
import matplotlib
matplotlib.use("module://matplotlib.backends.html5_canvas_backend")
- from js import window
- window.testing = True
-
import numpy as np
np.random.seed(19680801)
import matplotlib.pyplot as plt
@@ -607,24 +600,9 @@ def test_transparency(selenium_standalone):
ax.grid(True)
plt.show()
- """
- )
-
- check_comparison(selenium, "canvas-transparency", 1)
- finally:
- TARGET_PATH.unlink()
-
-
-class ResultLoaded:
- def __call__(self, driver):
- inited = driver.execute_script("return window.result")
- return inited is not None
-
-
-class FontsLoaded:
- def __init__(self, num_fonts):
- self.num_fonts = num_fonts
+ """,
+ )
- def __call__(self, driver):
- font_inited = driver.execute_script("return window.font_counter")
- return font_inited is not None and font_inited == self.num_fonts
+ assert compare_with_reference_image(
+ selenium, REFERENCE_IMAGES_PATH / f"canvas-transparency-{selenium.browser}.png"
+ )
| Matplotlib test is failing in CI
## 🐛 Bug
We are experiencing occasional failures in CI when [testing matplotlib](https://app.circleci.com/pipelines/github/hoodmane/pyodide/2186/workflows/e951d6f7-9949-41e0-ab7c-f7083de030a0/jobs/22549).
What I've found so far is that the matplotlib test checks [whether the font is loaded](https://github.com/pyodide/pyodide/blob/8d00451142d5ddb0237595065773b1f7bf290d33/packages/matplotlib/test_matplotlib.py#L623-L625), but the [callback that updates the `font_counter`](https://github.com/pyodide/pyodide/blob/8d00451142d5ddb0237595065773b1f7bf290d33/packages/matplotlib/src/html5_canvas_backend.py#L414) is not executed reliably. I am trying to figure this out, but strangely I can't reproduce this behavior locally.
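What the reworked tests (see the test patch above) do instead is signal completion from the font-loading callback itself rather than polling a counter on `window`. A minimal sketch of that synchronization, assuming the JS test driver has installed a global `resolve` callback before running the Python code:
```
from matplotlib.backends.html5_canvas_backend import RendererHTMLCanvas

_orig_load = RendererHTMLCanvas.load_font_into_web

def _load_and_signal(self, loaded_face, font_url):
    # run the real callback, then tell the waiting JS promise we are done
    fontface = _orig_load(self, loaded_face, font_url)
    from js import resolve  # assumed: installed by the test harness
    resolve()
    return fontface

RendererHTMLCanvas.load_font_into_web = _load_and_signal
```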
Note that the matplotlib tests were disabled (xfailed) for a long time and were only recently re-enabled (https://github.com/pyodide/pyodide/pull/2151).
_Update:_ the failing test is xfailed now (#2243)
| 2022-03-12T05:03:52 |
|
pyodide/pyodide | 2,292 | pyodide__pyodide-2292 | [
"2290"
] | abba842fd9526119567058858a496e00154aaff4 | diff --git a/conftest.py b/conftest.py
--- a/conftest.py
+++ b/conftest.py
@@ -155,7 +155,7 @@ def javascript_setup(self):
def load_pyodide(self):
self.run_js(
"""
- let pyodide = await loadPyodide({ indexURL : './', fullStdLib: false, jsglobals : self });
+ let pyodide = await loadPyodide({ fullStdLib: false, jsglobals : self });
self.pyodide = pyodide;
globalThis.pyodide = pyodide;
pyodide._api.inTestHoist = true; // improve some error messages for tests
| diff --git a/src/js/test/conftest.js b/src/js/test/conftest.js
--- a/src/js/test/conftest.js
+++ b/src/js/test/conftest.js
@@ -3,5 +3,5 @@ import path from "path";
globalThis.path = path;
before(async () => {
- globalThis.pyodide = await loadPyodide({ indexURL: "../../build/" });
+ globalThis.pyodide = await loadPyodide();
});
| Allow specifying the version of Pyodide just once
## 🚀 Feature
It seems that as things stand, to use Pyodide, you have to include the right version of the script (e.g., `<script src="https://cdn.jsdelivr.net/pyodide/v0.19.1/full/pyodide.js"></script>`) and request the right directory for `loadPyodide` (e.g., `loadPyodide({indexURL : "https://cdn.jsdelivr.net/pyodide/v0.19.1/full/"})`). The script should probably provide a function or variable that tells you the version (if it's not possible for it to recapitulate the entirety of its own URL) so that the version doesn't have to be repeated for `loadPyodide`.
### Motivation
Having to specify the same version twice makes it easy to accidentally desynchronize the versions.
| > if it's not possible for it to recapitulate the entirety of its own URL
I don't think there is a way to do this in general. In an ES6 module you can use [`import.meta`](https://developer.mozilla.org/en-US/docs/Web/JavaScript/Reference/Statements/import.meta); in the main thread you can use [`document.currentScript`](https://developer.mozilla.org/en-US/docs/Web/API/Document/currentScript); but if you are loaded with `importScripts` into a classic worker, I think there is nothing. That isn't even getting into all the different ways we could be loaded in Node, but the Node situation might be easier. Arguably the classic worker use case is the most important one, though, and I think it's not solvable. At least between html.spec.whatwg, MDN, and Stack Overflow, I find no mention of how to deal with this.
> The script should probably provide a function or variable that tells you the version
Seems like a reasonable request. Currently we define the version in the Pyodide package `__init__.py` file and extract it at runtime using Python. We could import pyodide at build time, pull the version out, and then inject it into the JavaScript bundle.
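For reference, that runtime value is already easy to read from the Python side once the interpreter is up; a quick sketch:
```
import pyodide
print(pyodide.__version__)  # the version defined in the package's __init__.py
```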
You could hard-code the CDN URL into each version before you upload it to the CDN, but that would only work if you loaded the script from the CDN, so I'm not sure how useful it would be.
That's not a good idea; if we go that route we are going to screw up the deploy.
Well it seems like there is an approach that works by throwing and catching an error and then parsing the file name out of the stack trace. | 2022-03-19T20:49:34 |
pyodide/pyodide | 2,305 | pyodide__pyodide-2305 | [
"1627"
] | 5f52842a579ce4e7a18a3a3c157203ab53ed4257 | diff --git a/conftest.py b/conftest.py
--- a/conftest.py
+++ b/conftest.py
@@ -385,12 +385,16 @@ class NodeWrapper(SeleniumWrapper):
browser = "node"
def init_node(self):
- self.p = pexpect.spawn(
- f"node --expose-gc --experimental-wasm-bigint ./src/test-js/node_test_driver.js {self.base_url}",
- timeout=60,
- )
+ self.p = pexpect.spawn("/bin/bash", timeout=60)
self.p.setecho(False)
self.p.delaybeforesend = None
+ # disable canonical input processing mode to allow sending longer lines
+ # See: https://pexpect.readthedocs.io/en/stable/api/pexpect.html#pexpect.spawn.send
+ self.p.sendline("stty -icanon")
+ self.p.sendline(
+ f"node --expose-gc --experimental-wasm-bigint ./src/test-js/node_test_driver.js {self.base_url}",
+ )
+
try:
self.p.expect_exact("READY!!")
except pexpect.exceptions.EOF:
diff --git a/pyodide-build/pyodide_build/buildall.py b/pyodide-build/pyodide_build/buildall.py
--- a/pyodide-build/pyodide_build/buildall.py
+++ b/pyodide-build/pyodide_build/buildall.py
@@ -206,13 +206,18 @@ def generate_dependency_graph(
if "*" in packages:
packages.discard("*")
packages.update(
- str(x) for x in packages_dir.iterdir() if (x / "meta.yaml").is_file()
+ str(x.name) for x in packages_dir.iterdir() if (x / "meta.yaml").is_file()
)
no_numpy_dependents = "no-numpy-dependents" in packages
if no_numpy_dependents:
packages.discard("no-numpy-dependents")
+ packages_exclude = list(filter(lambda pkg: pkg.startswith("!"), packages))
+ for pkg_exclude in packages_exclude:
+ packages.discard(pkg_exclude)
+ packages.discard(pkg_exclude[1:])
+
while packages:
pkgname = packages.pop()
diff --git a/pyodide-build/pyodide_build/pywasmcross.py b/pyodide-build/pyodide_build/pywasmcross.py
--- a/pyodide-build/pyodide_build/pywasmcross.py
+++ b/pyodide-build/pyodide_build/pywasmcross.py
@@ -494,6 +494,7 @@ def handle_command_generate_args(
if result:
new_args.append(result)
+
return new_args
@@ -527,6 +528,19 @@ def handle_command(
if args.pkgname == "scipy":
scipy_fixes(new_args)
+ # FIXME: For some unknown reason,
+ # opencv-python tries to link a same library (libopencv_world.a) multiple times,
+ # which leads to 'duplicated symbols' error.
+ if args.pkgname == "opencv-python":
+ duplicated_lib = "libopencv_world.a"
+ _new_args = []
+ for arg in new_args:
+ if duplicated_lib in arg and arg in _new_args:
+ continue
+ _new_args.append(arg)
+
+ new_args = _new_args
+
returncode = subprocess.run(new_args).returncode
if returncode != 0:
sys.exit(returncode)
| diff --git a/packages/opencv-python/test_opencv_python.py b/packages/opencv-python/test_opencv_python.py
new file mode 100644
--- /dev/null
+++ b/packages/opencv-python/test_opencv_python.py
@@ -0,0 +1,551 @@
+import base64
+import pathlib
+
+from pyodide_build.testing import run_in_pyodide
+
+REFERENCE_IMAGES_PATH = pathlib.Path(__file__).parent / "reference-images"
+
+
+def compare_with_reference_image(selenium, reference_image, var="img", grayscale=True):
+ reference_image_encoded = base64.b64encode(reference_image.read_bytes())
+ grayscale = "cv.IMREAD_GRAYSCALE" if grayscale else "cv.IMREAD_COLOR"
+ match_ratio = selenium.run(
+ f"""
+ import base64
+ import numpy as np
+ import cv2 as cv
+ DIFF_THRESHOLD = 2
+ arr = np.frombuffer(base64.b64decode({reference_image_encoded!r}), np.uint8)
+ ref_data = cv.imdecode(arr, {grayscale})
+
+ pixels_match = np.count_nonzero(np.abs({var}.astype(np.int16) - ref_data.astype(np.int16)) <= DIFF_THRESHOLD)
+ pixels_total = ref_data.size
+ float(pixels_match / pixels_total)
+ """
+ )
+
+ # Due to some randomness in the result, we allow a small difference
+ return match_ratio > 0.95
+
+
+def test_import(selenium):
+ selenium.set_script_timeout(60)
+ selenium.load_package("opencv-python")
+ selenium.run(
+ """
+ import cv2
+ cv2.__version__
+ """
+ )
+
+
+@run_in_pyodide(packages=["opencv-python", "numpy"])
+def test_image_extensions():
+ import cv2 as cv
+ import numpy as np
+
+ shape = (16, 16, 3)
+ img = np.zeros(shape, np.uint8)
+
+ extensions = {
+ "bmp": b"BM6\x03",
+ "jpg": b"\xff\xd8\xff\xe0",
+ "jpeg": b"\xff\xd8\xff\xe0",
+ "png": b"\x89PNG",
+ "webp": b"RIFF",
+ }
+
+ for ext, signature in extensions.items():
+ result, buf = cv.imencode(f".{ext}", img)
+ assert result
+ assert bytes(buf[:4]) == signature
+
+
+@run_in_pyodide(packages=["opencv-python", "numpy"])
+def test_io():
+ import cv2 as cv
+ import numpy as np
+
+ shape = (16, 16, 3)
+ img = np.zeros(shape, np.uint8)
+
+ filename = "test.bmp"
+ cv.imwrite(filename, img)
+ img_ = cv.imread(filename)
+ assert img_.shape == img.shape
+
+
+@run_in_pyodide(packages=["opencv-python", "numpy"])
+def test_drawing():
+ import cv2 as cv
+ import numpy as np
+
+ width = 100
+ height = 100
+ shape = (width, height, 3)
+ img = np.zeros(shape, np.uint8)
+
+ cv.line(img, (0, 0), (width - 1, 0), (255, 0, 0), 5)
+ cv.line(img, (0, 0), (0, height - 1), (0, 0, 255), 5)
+ cv.rectangle(img, (0, 0), (width // 2, height // 2), (0, 255, 0), 2)
+ cv.circle(img, (0, 0), radius=width // 2, color=(255, 0, 0))
+ cv.putText(img, "Hello Pyodide", (0, 0), cv.FONT_HERSHEY_SIMPLEX, 1, (255, 0, 0), 2)
+
+
+@run_in_pyodide(packages=["opencv-python", "numpy"])
+def test_pixel_access():
+ import cv2 as cv
+ import numpy as np
+
+ shape = (16, 16, 3)
+ img = np.zeros(shape, np.uint8)
+
+ img[5, 5] = [1, 2, 3]
+ assert list(img[5, 5]) == [1, 2, 3]
+
+ b, g, r = cv.split(img)
+ img_ = cv.merge([b, g, r])
+ assert (img == img_).all()
+
+
+@run_in_pyodide(packages=["opencv-python", "numpy"])
+def test_image_processing():
+ import cv2 as cv
+ import numpy as np
+
+ # Masking
+ img = np.random.randint(0, 255, size=500)
+ lower = np.array([0])
+ upper = np.array([200])
+ mask = cv.inRange(img, lower, upper)
+ res = cv.bitwise_and(img, img, mask=mask)
+ assert not (res > 200).any()
+
+
+def test_edge_detection(selenium):
+ original_img = base64.b64encode((REFERENCE_IMAGES_PATH / "baboon.png").read_bytes())
+ selenium.load_package("opencv-python")
+ selenium.run(
+ f"""
+ import base64
+ import cv2 as cv
+ import numpy as np
+ src = np.frombuffer(base64.b64decode({original_img!r}), np.uint8)
+ src = cv.imdecode(src, cv.IMREAD_COLOR)
+ gray = cv.cvtColor(src, cv.COLOR_BGR2GRAY)
+ sobel = cv.Sobel(gray, cv.CV_8U, 1, 0, 3)
+ laplacian = cv.Laplacian(gray, cv.CV_8U, ksize=3)
+ canny = cv.Canny(src, 100, 255)
+ None
+ """
+ )
+
+ assert compare_with_reference_image(
+ selenium, REFERENCE_IMAGES_PATH / "baboon_sobel.png", "sobel"
+ )
+ assert compare_with_reference_image(
+ selenium, REFERENCE_IMAGES_PATH / "baboon_laplacian.png", "laplacian"
+ )
+ assert compare_with_reference_image(
+ selenium, REFERENCE_IMAGES_PATH / "baboon_canny.png", "canny"
+ )
+
+
+def test_photo_decolor(selenium):
+ original_img = base64.b64encode((REFERENCE_IMAGES_PATH / "baboon.png").read_bytes())
+ selenium.load_package("opencv-python")
+ selenium.run(
+ f"""
+ import base64
+ import cv2 as cv
+ import numpy as np
+ src = np.frombuffer(base64.b64decode({original_img!r}), np.uint8)
+ src = cv.imdecode(src, cv.IMREAD_COLOR)
+ grayscale, color_boost = cv.decolor(src)
+ None
+ """
+ )
+
+ assert compare_with_reference_image(
+ selenium, REFERENCE_IMAGES_PATH / "baboon_decolor_grayscale.png", "grayscale"
+ )
+ assert compare_with_reference_image(
+ selenium,
+ REFERENCE_IMAGES_PATH / "baboon_decolor_color_boost.png",
+ "color_boost",
+ grayscale=False,
+ )
+
+
+def test_stitch(selenium):
+ original_img_left = base64.b64encode(
+ (REFERENCE_IMAGES_PATH / "mountain1.png").read_bytes()
+ )
+ original_img_right = base64.b64encode(
+ (REFERENCE_IMAGES_PATH / "mountain2.png").read_bytes()
+ )
+ selenium.load_package("opencv-python")
+ selenium.run(
+ f"""
+ import base64
+ import cv2 as cv
+ import numpy as np
+ left = np.frombuffer(base64.b64decode({original_img_left!r}), np.uint8)
+ left = cv.imdecode(left, cv.IMREAD_COLOR)
+ right = np.frombuffer(base64.b64decode({original_img_right!r}), np.uint8)
+ right = cv.imdecode(right, cv.IMREAD_COLOR)
+ stitcher = cv.Stitcher.create(cv.Stitcher_PANORAMA)
+ status, panorama = stitcher.stitch([left, right])
+
+ # It seems that the result is not always the same due to the randomness, so check the status and size instead
+ assert status == cv.Stitcher_OK
+ assert panorama.shape[0] >= max(left.shape[0], right.shape[0])
+ assert panorama.shape[1] >= max(left.shape[1], right.shape[1])
+ """
+ )
+
+
+def test_video_optical_flow(selenium):
+ original_img = base64.b64encode(
+ (REFERENCE_IMAGES_PATH / "traffic.mp4").read_bytes()
+ )
+ selenium.load_package("opencv-python")
+ selenium.run(
+ f"""
+ import base64
+ import cv2 as cv
+ import numpy as np
+
+ src = base64.b64decode({original_img!r})
+
+ video_path = "video.mp4"
+ with open(video_path, "wb") as f:
+ f.write(src)
+
+ cap = cv.VideoCapture(video_path)
+ assert cap.isOpened()
+
+ # params for ShiTomasi corner detection
+ feature_params = dict( maxCorners = 100,
+ qualityLevel = 0.3,
+ minDistance = 7,
+ blockSize = 7 )
+ # Parameters for lucas kanade optical flow
+ lk_params = dict( winSize = (15, 15),
+ maxLevel = 2,
+ criteria = (cv.TERM_CRITERIA_EPS | cv.TERM_CRITERIA_COUNT, 10, 0.03))
+
+ # Take first frame and find corners in it
+ ret, old_frame = cap.read()
+ assert ret
+
+ old_gray = cv.cvtColor(old_frame, cv.COLOR_BGR2GRAY)
+ p0 = cv.goodFeaturesToTrack(old_gray, mask = None, **feature_params)
+ # Create a mask image for drawing purposes
+ mask = np.zeros_like(old_frame)
+ while(1):
+ ret, frame = cap.read()
+ if not ret:
+ break
+ frame_gray = cv.cvtColor(frame, cv.COLOR_BGR2GRAY)
+ # calculate optical flow
+ p1, st, err = cv.calcOpticalFlowPyrLK(old_gray, frame_gray, p0, None, **lk_params)
+ # Select good points
+ if p1 is not None:
+ good_new = p1[st==1]
+ good_old = p0[st==1]
+ # draw the tracks
+ for i, (new, old) in enumerate(zip(good_new, good_old)):
+ a, b = new.ravel()
+ c, d = old.ravel()
+ mask = cv.line(mask, (int(a), int(b)), (int(c), int(d)), [0, 0, 255], 2)
+ frame = cv.circle(frame, (int(a), int(b)), 5, [255, 0, 0], -1)
+ img = cv.add(frame, mask)
+ # Now update the previous frame and previous points
+ old_gray = frame_gray.copy()
+ p0 = good_new.reshape(-1, 1, 2)
+
+ optical_flow = img
+ None
+ """
+ )
+
+ assert compare_with_reference_image(
+ selenium,
+ REFERENCE_IMAGES_PATH / "traffic_optical_flow.png",
+ "optical_flow",
+ grayscale=False,
+ )
+
+
+def test_flann_sift(selenium):
+ original_img_src1 = base64.b64encode(
+ (REFERENCE_IMAGES_PATH / "box.png").read_bytes()
+ )
+ original_img_src2 = base64.b64encode(
+ (REFERENCE_IMAGES_PATH / "box_in_scene.png").read_bytes()
+ )
+ selenium.load_package("opencv-python")
+ selenium.run(
+ f"""
+ import base64
+ import cv2 as cv
+ import numpy as np
+ src1 = np.frombuffer(base64.b64decode({original_img_src1!r}), np.uint8)
+ src1 = cv.imdecode(src1, cv.IMREAD_GRAYSCALE)
+ src2 = np.frombuffer(base64.b64decode({original_img_src2!r}), np.uint8)
+ src2 = cv.imdecode(src2, cv.IMREAD_GRAYSCALE)
+
+ #-- Step 1: Detect the keypoints using SIFT Detector, compute the descriptors
+ detector = cv.SIFT_create()
+ keypoints1, descriptors1 = detector.detectAndCompute(src1, None)
+ keypoints2, descriptors2 = detector.detectAndCompute(src2, None)
+
+ #-- Step 2: Matching descriptor vectors with a FLANN based matcher
+ matcher = cv.DescriptorMatcher_create(cv.DescriptorMatcher_FLANNBASED)
+ knn_matches = matcher.knnMatch(descriptors1, descriptors2, 2)
+
+ #-- Filter matches using the Lowe's ratio test
+ ratio_thresh = 0.3
+ good_matches = []
+ for m,n in knn_matches:
+ if m.distance < ratio_thresh * n.distance:
+ good_matches.append(m)
+
+ #-- Draw matches
+ matches = np.empty((max(src1.shape[0], src2.shape[0]), src1.shape[1]+src2.shape[1], 3), dtype=np.uint8)
+ cv.drawMatches(src1, keypoints1, src2, keypoints2, good_matches, matches, matchColor=[255, 0, 0], flags=cv.DrawMatchesFlags_NOT_DRAW_SINGLE_POINTS)
+
+ sift_result = cv.cvtColor(matches, cv.COLOR_BGR2GRAY)
+ None
+ """
+ )
+
+ assert compare_with_reference_image(
+ selenium,
+ REFERENCE_IMAGES_PATH / "box_sift.png",
+ "sift_result",
+ grayscale=True,
+ )
+
+
+def test_dnn_mnist(selenium):
+ """
+ Run tiny MNIST classification ONNX model
+ Training script: https://github.com/ryanking13/torch-opencv-mnist
+ """
+
+ original_img = base64.b64encode(
+ (REFERENCE_IMAGES_PATH / "mnist_2.png").read_bytes()
+ )
+ tf_model = base64.b64encode((REFERENCE_IMAGES_PATH / "mnist.onnx").read_bytes())
+ selenium.load_package("opencv-python")
+ selenium.run(
+ f"""
+ import base64
+ import cv2 as cv
+ import numpy as np
+
+ model_weights = base64.b64decode({tf_model!r})
+ model_weights_path = './mnist.onnx'
+ with open(model_weights_path, 'wb') as f:
+ f.write(model_weights)
+
+ src = np.frombuffer(base64.b64decode({original_img!r}), np.uint8)
+ src = cv.imdecode(src, cv.IMREAD_GRAYSCALE)
+
+ net = cv.dnn.readNet(model_weights_path)
+ blob = cv.dnn.blobFromImage(src, 1.0, (28, 28), (0, 0, 0), False, False)
+
+ net.setInput(blob)
+ prob = net.forward()
+ assert "output_0" in net.getLayerNames()
+ assert np.argmax(prob) == 2
+ """
+ )
+
+
+def test_ml_pca(selenium):
+ original_img = base64.b64encode((REFERENCE_IMAGES_PATH / "pca.png").read_bytes())
+ selenium.load_package("opencv-python")
+ selenium.run(
+ f"""
+ import base64
+ import cv2 as cv
+ import numpy as np
+ from math import atan2, cos, sin, sqrt, pi
+
+ def drawAxis(img, p_, q_, colour, scale):
+ p = list(p_)
+ q = list(q_)
+
+ angle = atan2(p[1] - q[1], p[0] - q[0]) # angle in radians
+ hypotenuse = sqrt((p[1] - q[1]) * (p[1] - q[1]) + (p[0] - q[0]) * (p[0] - q[0]))
+ # Here we lengthen the arrow by a factor of scale
+ q[0] = p[0] - scale * hypotenuse * cos(angle)
+ q[1] = p[1] - scale * hypotenuse * sin(angle)
+ cv.line(img, (int(p[0]), int(p[1])), (int(q[0]), int(q[1])), colour, 1, cv.LINE_AA)
+ # create the arrow hooks
+ p[0] = q[0] + 9 * cos(angle + pi / 4)
+ p[1] = q[1] + 9 * sin(angle + pi / 4)
+ cv.line(img, (int(p[0]), int(p[1])), (int(q[0]), int(q[1])), colour, 1, cv.LINE_AA)
+ p[0] = q[0] + 9 * cos(angle - pi / 4)
+ p[1] = q[1] + 9 * sin(angle - pi / 4)
+ cv.line(img, (int(p[0]), int(p[1])), (int(q[0]), int(q[1])), colour, 1, cv.LINE_AA)
+
+ def getOrientation(pts, img):
+
+ sz = len(pts)
+ data_pts = np.empty((sz, 2), dtype=np.float64)
+ for i in range(data_pts.shape[0]):
+ data_pts[i,0] = pts[i,0,0]
+ data_pts[i,1] = pts[i,0,1]
+ # Perform PCA analysis
+ mean = np.empty((0))
+ mean, eigenvectors, eigenvalues = cv.PCACompute2(data_pts, mean)
+ # Store the center of the object
+ cntr = (int(mean[0,0]), int(mean[0,1]))
+
+
+ cv.circle(img, cntr, 3, (255, 0, 255), 2)
+ p1 = (cntr[0] + 0.02 * eigenvectors[0,0] * eigenvalues[0,0], cntr[1] + 0.02 * eigenvectors[0,1] * eigenvalues[0,0])
+ p2 = (cntr[0] - 0.02 * eigenvectors[1,0] * eigenvalues[1,0], cntr[1] - 0.02 * eigenvectors[1,1] * eigenvalues[1,0])
+ drawAxis(img, cntr, p1, (0, 255, 0), 1)
+ drawAxis(img, cntr, p2, (255, 255, 0), 5)
+ angle = atan2(eigenvectors[0,1], eigenvectors[0,0]) # orientation in radians
+
+ return angle
+
+ src = np.frombuffer(base64.b64decode({original_img!r}), np.uint8)
+ src = cv.imdecode(src, cv.IMREAD_COLOR)
+ gray = cv.cvtColor(src, cv.COLOR_BGR2GRAY)
+
+ # Convert image to binary
+ _, bw = cv.threshold(gray, 50, 255, cv.THRESH_BINARY | cv.THRESH_OTSU)
+ contours, _ = cv.findContours(bw, cv.RETR_LIST, cv.CHAIN_APPROX_NONE)
+ for i, c in enumerate(contours):
+ # Calculate the area of each contour
+ area = cv.contourArea(c)
+ # Ignore contours that are too small or too large
+ if area < 1e2 or 1e5 < area:
+ continue
+ # Draw each contour only for visualisation purposes
+ cv.drawContours(src, contours, i, (0, 0, 255), 2)
+ # Find the orientation of each shape
+ getOrientation(c, src)
+
+ pca_result = src
+ None
+ """
+ )
+
+ assert compare_with_reference_image(
+ selenium,
+ REFERENCE_IMAGES_PATH / "pca_result.png",
+ "pca_result",
+ grayscale=False,
+ )
+
+
+def test_objdetect_face(selenium):
+ original_img = base64.b64encode(
+ (REFERENCE_IMAGES_PATH / "monalisa.png").read_bytes()
+ )
+ selenium.load_package("opencv-python")
+ selenium.run(
+ f"""
+ import base64
+ import cv2 as cv
+ import numpy as np
+ from pathlib import Path
+
+ src = np.frombuffer(base64.b64decode({original_img!r}), np.uint8)
+ src = cv.imdecode(src, cv.IMREAD_COLOR)
+ gray = cv.cvtColor(src, cv.COLOR_BGR2GRAY)
+ gray = cv.equalizeHist(gray)
+
+ face_cascade = cv.CascadeClassifier()
+ eyes_cascade = cv.CascadeClassifier()
+ data_path = Path(cv.data.haarcascades)
+ face_cascade.load(str(data_path / "haarcascade_frontalface_alt.xml"))
+ eyes_cascade.load(str(data_path / "haarcascade_eye_tree_eyeglasses.xml"))
+
+ faces = face_cascade.detectMultiScale(gray)
+ face_detected = src.copy()
+ for (x,y,w,h) in faces:
+ center = (x + w//2, y + h//2)
+ face_detected = cv.ellipse(face_detected, center, (w//2, h//2), 0, 0, 360, (255, 0, 255), 4)
+ faceROI = gray[y:y+h,x:x+w]
+ eyes = eyes_cascade.detectMultiScale(faceROI)
+ for (x2,y2,w2,h2) in eyes:
+ eye_center = (x + x2 + w2//2, y + y2 + h2//2)
+ radius = int(round((w2 + h2)*0.25))
+ face_detected = cv.circle(face_detected, eye_center, radius, (255, 0, 0 ), 4)
+
+ None
+ """
+ )
+
+ assert compare_with_reference_image(
+ selenium,
+ REFERENCE_IMAGES_PATH / "monalisa_facedetect.png",
+ "face_detected",
+ grayscale=False,
+ )
+
+
+def test_feature2d_kaze(selenium):
+ original_img = base64.b64encode((REFERENCE_IMAGES_PATH / "baboon.png").read_bytes())
+ selenium.load_package("opencv-python")
+ selenium.run(
+ f"""
+ import base64
+ import cv2 as cv
+ import numpy as np
+ src = np.frombuffer(base64.b64decode({original_img!r}), np.uint8)
+ src = cv.imdecode(src, cv.IMREAD_COLOR)
+
+ detector = cv.KAZE_create()
+ keypoints = detector.detect(src)
+
+ kaze = cv.drawKeypoints(src, keypoints, None, color=(0, 0, 255), flags=cv.DRAW_MATCHES_FLAGS_DRAW_RICH_KEYPOINTS)
+ None
+ """
+ )
+
+ assert compare_with_reference_image(
+ selenium,
+ REFERENCE_IMAGES_PATH / "baboon_kaze.png",
+ "kaze",
+ grayscale=False,
+ )
+
+
+def test_calib3d_chessboard(selenium):
+ original_img = base64.b64encode(
+ (REFERENCE_IMAGES_PATH / "chessboard.png").read_bytes()
+ )
+ selenium.load_package("opencv-python")
+ selenium.run(
+ f"""
+ import base64
+ import cv2 as cv
+ import numpy as np
+ src = np.frombuffer(base64.b64decode({original_img!r}), np.uint8)
+ src = cv.imdecode(src, cv.IMREAD_COLOR)
+
+ criteria = (cv.TERM_CRITERIA_EPS + cv.TERM_CRITERIA_MAX_ITER, 30, 0.001)
+ gray = cv.cvtColor(src, cv.COLOR_BGR2GRAY)
+ ret, corners = cv.findChessboardCorners(gray, (9, 6), None)
+ cv.cornerSubPix(gray, corners, (11, 11), (-1, -1), criteria)
+ cv.drawChessboardCorners(gray, (9, 6), corners, ret)
+ chessboard_corners = gray
+ None
+ """
+ )
+
+ assert compare_with_reference_image(
+ selenium,
+ REFERENCE_IMAGES_PATH / "chessboard_corners.png",
+ "chessboard_corners",
+ )
| install opencv-python with micropip error
I am currently working on a project that uses ML, and this library is truly a game changer for projects in JS... Any chance of having opencv-python included in the list of supported Python libraries? Thank you in advance.
Meanwhile, I tried to import the opencv-python wheel with this:
```
pyodide.runPythonAsync(`
import micropip
await micropip.install('https://files.pythonhosted.org/packages/cf/e1/784c7092d391617ba808d1389f09c8ab2339bd1ab3c6ca117834c2a20914/opencv_python-4.0.1.23-cp27-cp27m-manylinux1_i686.whl')
`);
import cv2 as cv
```
but I got:
```
ImportError: Could not load dynamic lib: /lib/python3.8/site-packages/cv2/cv2.so
RuntimeError: abort(Assertion failed: need to see wasm magic number). Build with -s ASSERTIONS=1 for more info.
```
and if I use a newer release of opencv-python, for instance:
https://files.pythonhosted.org/packages/f1/0d/507319eba1c12622aa26be102f5d2cf1025ee0447fed7c054d3ab429038e/opencv_python-4.4.0.40-cp35-cp35m-macosx_10_13_x86_64.whl
I get:
```
pyodideWorker error: Traceback (most recent call last):
File "/lib/python3.8/asyncio/futures.py", line 178, in result
raise self._exception
File "/lib/python3.8/asyncio/tasks.py", line 280, in __step
result = coro.send(None)
File "/lib/python3.8/site-packages/pyodide/_base.py", line 419, in eval_code_async
return await CodeRunner(
File "/lib/python3.8/site-packages/pyodide/_base.py", line 270, in run_async
coro = eval(mod, self.globals, self.locals)
File "<exec>", line 6, in <module>
File "/lib/python3.8/site-packages/cv2/__init__.py", line 5, in <module>
from .cv2 import *
ModuleNotFoundError: No module named 'cv2.cv2'
```
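Both tracebacks point at the same root cause: these are native wheels (note the `manylinux1_i686` and `macosx_10_13_x86_64` platform tags), so the bundled `cv2` shared library is native machine code rather than WebAssembly, which is exactly what the `wasm magic number` assertion is complaining about. One way to inspect a wheel's platform tags, sketched here with the `packaging` library (the filename is the one from the first attempt above):
```
from packaging.utils import parse_wheel_filename

name, version, build, tags = parse_wheel_filename(
    "opencv_python-4.0.1.23-cp27-cp27m-manylinux1_i686.whl"
)
print([str(t) for t in tags])
# ['cp27-cp27m-manylinux1_i686'] -> built for native i686 Linux, not wasm
```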
Are these not valid wheel files to use?
https://pypi.org/project/opencv-python/4.0.1.23/#files
or
https://pypi.org/project/opencv-python/4.4.0.40/#files
I'm probably missing something, as Python is a new field for me.
Thank you in advance.
You may use opencv.js invoked from Python side. | 2022-03-23T07:24:41 |
pyodide/pyodide | 2,378 | pyodide__pyodide-2378 | [
"1973"
] | 95b1194945287aee450f9c19d8acdd7de9bc018f | diff --git a/cpython/adjust_sysconfig.py b/cpython/adjust_sysconfig.py
--- a/cpython/adjust_sysconfig.py
+++ b/cpython/adjust_sysconfig.py
@@ -23,7 +23,7 @@ def adjust_sysconfig(config_vars: dict[str, str]):
MAINCC="cc",
LDSHARED="cc",
LINKCC="cc",
- BLDSHARED="cc",
+ BLDSHARED="emcc -sSIDE_MODULE=1", # setuptools-rust looks at this
CXX="c++",
LDCXXSHARED="c++",
)
diff --git a/pyodide-build/pyodide_build/buildpkg.py b/pyodide-build/pyodide_build/buildpkg.py
--- a/pyodide-build/pyodide_build/buildpkg.py
+++ b/pyodide-build/pyodide_build/buildpkg.py
@@ -135,6 +135,7 @@ def get_bash_runner():
"PYTHONINCLUDE",
"NUMPY_LIB",
"PYODIDE_PACKAGE_ABI",
+ "HOME",
"HOSTINSTALLDIR",
"TARGETINSTALLDIR",
"SYSCONFIG_NAME",
@@ -149,6 +150,11 @@ def get_bash_runner():
"UNISOLATED_PACKAGES",
"WASM_LIBRARY_DIR",
"WASM_PKG_CONFIG_PATH",
+ "CARGO_BUILD_TARGET",
+ "CARGO_HOME",
+ "CARGO_TARGET_WASM32_UNKNOWN_EMSCRIPTEN_LINKER",
+ "RUSTFLAGS",
+ "PYO3_CONFIG_FILE",
]
} | {"PYODIDE": "1"}
if "PYODIDE_JOBS" in os.environ:
diff --git a/pyodide-build/pyodide_build/pywasmcross.py b/pyodide-build/pyodide_build/pywasmcross.py
--- a/pyodide-build/pyodide_build/pywasmcross.py
+++ b/pyodide-build/pyodide_build/pywasmcross.py
@@ -10,6 +10,7 @@
"""
import json
import os
+import shutil
import sys
IS_MAIN = __name__ == "__main__"
@@ -30,7 +31,7 @@
from pyodide_build import common
from pyodide_build._f2c_fixes import fix_f2c_input, fix_f2c_output, scipy_fixes
-symlinks = {"cc", "c++", "ld", "ar", "gcc", "gfortran"}
+symlinks = {"cc", "c++", "ld", "ar", "gcc", "gfortran", "cargo"}
def symlink_dir():
@@ -537,6 +538,13 @@ def handle_command(
if returncode != 0:
sys.exit(returncode)
+ # Rust gives output files a `.wasm` suffix, but we need them to have a `.so`
+ # suffix.
+ if line[0:2] == ["cargo", "rustc"]:
+ p = Path(args.builddir)
+ for x in p.glob("**/*.wasm"):
+ shutil.move(x, x.with_suffix(".so"))
+
sys.exit(returncode)
| diff --git a/packages/cryptography/test_cryptography.py b/packages/cryptography/test_cryptography.py
--- a/packages/cryptography/test_cryptography.py
+++ b/packages/cryptography/test_cryptography.py
@@ -17,149 +17,6 @@ def test_cryptography(selenium):
assert f1.decrypt(f.encrypt(b"abc")) == b"abc"
-@run_in_pyodide(packages=["cryptography", "pytest"])
-def test_der_reader_basic(selenium):
- import pytest
- from cryptography.hazmat._der import DERReader
-
- reader = DERReader(b"123456789")
- assert reader.read_byte() == ord(b"1")
- assert reader.read_bytes(1).tobytes() == b"2"
- assert reader.read_bytes(4).tobytes() == b"3456"
-
- with pytest.raises(ValueError):
- reader.read_bytes(4)
-
- assert reader.read_bytes(3).tobytes() == b"789"
-
- # The input is now empty.
- with pytest.raises(ValueError):
- reader.read_bytes(1)
- with pytest.raises(ValueError):
- reader.read_byte()
-
-
-@run_in_pyodide(packages=["cryptography", "pytest"])
-def test_der(selenium):
- import pytest
- from cryptography.hazmat._der import (
- INTEGER,
- NULL,
- OCTET_STRING,
- SEQUENCE,
- DERReader,
- encode_der,
- encode_der_integer,
- )
-
- # This input is the following structure, using
- # https://github.com/google/der-ascii
- #
- # SEQUENCE {
- # SEQUENCE {
- # NULL {}
- # INTEGER { 42 }
- # OCTET_STRING { "hello" }
- # }
- # }
- der = b"\x30\x0e\x30\x0c\x05\x00\x02\x01\x2a\x04\x05\x68\x65\x6c\x6c\x6f"
- reader = DERReader(der)
- with pytest.raises(ValueError):
- reader.check_empty()
-
- with pytest.raises(ValueError):
- with reader:
- pass
-
- with pytest.raises(ZeroDivisionError):
- with DERReader(der):
- raise ZeroDivisionError
-
- # Parse the outer element.
- outer = reader.read_element(SEQUENCE)
- reader.check_empty()
- assert outer.data.tobytes() == der[2:]
-
- # Parse the outer element with read_any_element.
- reader = DERReader(der)
- tag, outer2 = reader.read_any_element()
- reader.check_empty()
- assert tag == SEQUENCE
- assert outer2.data.tobytes() == der[2:]
-
- # Parse the outer element with read_single_element.
- outer3 = DERReader(der).read_single_element(SEQUENCE)
- assert outer3.data.tobytes() == der[2:]
-
- # read_single_element rejects trailing data.
- with pytest.raises(ValueError):
- DERReader(der + der).read_single_element(SEQUENCE)
-
- # Continue parsing the structure.
- inner = outer.read_element(SEQUENCE)
- outer.check_empty()
-
- # Parsing a missing optional element should work.
- assert inner.read_optional_element(INTEGER) is None
-
- null = inner.read_element(NULL)
- null.check_empty()
-
- # Parsing a present optional element should work.
- integer = inner.read_optional_element(INTEGER)
- assert integer.as_integer() == 42
-
- octet_string = inner.read_element(OCTET_STRING)
- assert octet_string.data.tobytes() == b"hello"
-
- # Parsing a missing optional element should work when the input is empty.
- inner.check_empty()
- assert inner.read_optional_element(INTEGER) is None
-
- # Re-encode the same structure.
- der2 = encode_der(
- SEQUENCE,
- encode_der(
- SEQUENCE,
- encode_der(NULL),
- encode_der(INTEGER, encode_der_integer(42)),
- encode_der(OCTET_STRING, b"hello"),
- ),
- )
- assert der2 == der
-
-
-@run_in_pyodide(packages=["cryptography"])
-def test_der_lengths(selenium):
-
- from cryptography.hazmat._der import OCTET_STRING, DERReader, encode_der
-
- for [length, header] in [
- # Single-byte lengths.
- (0, b"\x04\x00"),
- (1, b"\x04\x01"),
- (2, b"\x04\x02"),
- (127, b"\x04\x7f"),
- # Long-form lengths.
- (128, b"\x04\x81\x80"),
- (129, b"\x04\x81\x81"),
- (255, b"\x04\x81\xff"),
- (0x100, b"\x04\x82\x01\x00"),
- (0x101, b"\x04\x82\x01\x01"),
- (0xFFFF, b"\x04\x82\xff\xff"),
- (0x10000, b"\x04\x83\x01\x00\x00"),
- ]:
- body = length * b"a"
- der = header + body
-
- reader = DERReader(der)
- element = reader.read_element(OCTET_STRING)
- reader.check_empty()
- assert element.data.tobytes() == body
-
- assert encode_der(OCTET_STRING, body) == der
-
-
@settings(suppress_health_check=[HealthCheck.too_slow], deadline=None)
@given(data=binary())
def test_fernet(selenium_module_scope, data):
| Building a Python package that uses Rust
## 🚀 Feature
Tooling and/or docs to support building Python packages that contain Rust code.
### Motivation
I was in the process of (attempting to) create a JS/wasm build of [Hugging Face Tokenizers](https://github.com/huggingface/tokenizers/) when I realised that it might just be easier/better for my use case to load it with Pyodide (especially since I might want to use Python packages that depend upon Tokenizers). The [docs show](https://github.com/pyodide/pyodide/blob/main/docs/development/new-packages.md) how to build a Python package that uses C (with Emscripten), but not Rust.
Rust has great, simple wasm build support via [`wasm-pack`](https://github.com/rustwasm/wasm-pack). After annotating the Rust functions/modules that you want to export (and installing wasm-pack with `cargo install wasm-pack`), it's just a single `wasm-pack build --target=web --out-dir=wasm` to get the wasm and glue files. [Here's a hello world tutorial](https://developer.mozilla.org/en-US/docs/WebAssembly/Rust_to_wasm).
So I'm wondering: Is this feasible now with the right configuration, or is there some extra work on the Pyodide dev tooling to get this working? If it's not possible with the current tooling, what might be the steps required to make it possible?
Thanks!
| Thanks for proposing it! There was an earlier analysis of feasibility in https://github.com/pyodide/pyodide/issues/1282#issuecomment-785459008 and it doesn't look very promising.
Maybe a start could be trying to build a very simple PyO3 Python library. Related: https://github.com/PyO3/pyo3/issues/1522
Even if, in the end, we manage to compile that library (say, for RustPython) using wasm-pack, you end up with a Python extension built for a different Python interpreter, and the compatibility story there is not very good.
I wonder if it would be possible to convince Rust to emit LLVM IR and then use emcc to do the backend compilation and linkage. Rust also has a wasm32-emscripten target, but it's apparently pretty broken.
@hoodmane @rth Thanks for your tips! Having looked into this, it seems to be a bit over my head at my current level of knowledge. That said, it would be great if I could make a dent in at least part of the problem.
My understanding is that if we get a simple Python+Rust library compiled with PyO3 (with the Rust converted to wasm), then we'll be able to swap out Python with Pyodide? But from reading some of those GitHub issues, it sounds like PyO3 may create different interfaces compared to what Pyodide expects? Would someone be able to list the rough high-level problems that would likely need to be solved here?
Question: Since it should be pretty easy to compile the Rust code to wasm (+ JS glue) with wasm-pack (at least, going by my past experiences with wasm-pack), I was wondering whether it would be possible to then import that JS module into Python/Pyodide in such a way that Pyodide essentially sees that JS module as it would the PyO3 Python module? Or would there be "interface" issues due to differences in the way wasm-pack and PyO3 generate glue code?
> I wonder if it could be possible to convince Rust to emit llvm ir
[The answer here](https://stackoverflow.com/questions/39004513/how-to-emit-llvm-ir-from-cargo) says you can emit llvm-ir with `cargo rustc -- --emit=llvm-ir`
> it seems to be a bit over my head at my current level of knowledge
One of the fun parts of Pyodide is that a lot of it is over our heads, at least until we figure it out =)
> Maybe a start could be trying to build a very simple PyO3 Python library.
I think perhaps even better would be if we could write a trivial Rust extension without using PyO3. I just tried to do this for a while, but I keep getting errors at import time.
Here's my partial work on this:
https://github.com/hoodmane/minimal-rust-pyton-ext/blob/master/src/lib.rs
You can build with `cargo build` and then use
`cp target/debug/libpython_min_test.so mymod.cpython-39-x86_64-linux-gnu.so`
then you can try to `import mymod`, but there are errors.
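One concrete obstacle worth recording here: cargo emits `*.wasm` artifacts, while CPython's extension loader expects a `*.so` suffix. The `pywasmcross.py` patch above handles this with a rename pass after the build; a rough sketch of that step, assuming cargo's default `target/` layout:
```
import shutil
from pathlib import Path

# give every cargo-produced .wasm artifact the .so suffix Python looks for
for artifact in Path("target").glob("**/*.wasm"):
    shutil.move(artifact, artifact.with_suffix(".so"))
```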
Anyways, if we can get a module like this to work, then we could try to compile it to work with Pyodide. | 2022-04-09T00:40:13 |
pyodide/pyodide | 2,414 | pyodide__pyodide-2414 | [
"2408"
] | 07724919d089e4eef45e15c098efabdaa9e77761 | diff --git a/packages/micropip/src/micropip/_micropip.py b/packages/micropip/src/micropip/_micropip.py
--- a/packages/micropip/src/micropip/_micropip.py
+++ b/packages/micropip/src/micropip/_micropip.py
@@ -17,7 +17,7 @@
from pyodide import IN_BROWSER, to_js
from .externals.pip._internal.utils.wheel import pkg_resources_distribution_for_wheel
-from .package import PackageDict, PackageMetadata
+from .package import PackageDict, PackageMetadata, normalize_package_name
# Provide stubs for testing in native python
if IN_BROWSER:
@@ -205,7 +205,10 @@ async def _install(install_func, done_callback):
),
functools.partial(
self.installed_packages.update,
- {pkg.name: pkg for pkg in pyodide_packages},
+ {
+ normalize_package_name(pkg.name): pkg
+ for pkg in pyodide_packages
+ },
),
)
)
@@ -220,7 +223,11 @@ async def _install(install_func, done_callback):
_install_wheel(name, wheel),
functools.partial(
self.installed_packages.update,
- {name: PackageMetadata(name, str(ver), wheel_source)},
+ {
+ normalize_package_name(name): PackageMetadata(
+ name, str(ver), wheel_source
+ )
+ },
),
)
)
@@ -293,7 +300,11 @@ async def add_requirement(self, requirement: str | Requirement, ctx, transaction
)
async def add_wheel(self, name, wheel, version, extras, ctx, transaction):
- transaction["locked"][name] = PackageMetadata(name=name, version=version)
+ normalized_name = normalize_package_name(name)
+ transaction["locked"][normalized_name] = PackageMetadata(
+ name=name,
+ version=version,
+ )
try:
wheel_bytes = await fetch_bytes(wheel["url"])
@@ -312,10 +323,15 @@ async def add_wheel(self, name, wheel, version, extras, ctx, transaction):
with ZipFile(io.BytesIO(wheel_bytes)) as zip_file:
dist = pkg_resources_distribution_for_wheel(zip_file, name, "???")
+
+ project_name = dist.project_name
+ if project_name == "UNKNOWN":
+ project_name = name
+
for recurs_req in dist.requires(extras):
await self.add_requirement(recurs_req, ctx, transaction)
- transaction["wheels"].append((name, wheel, version))
+ transaction["wheels"].append((project_name, wheel, version))
def find_wheel(
self, metadata: dict[str, Any], req: Requirement
diff --git a/packages/micropip/src/micropip/package.py b/packages/micropip/src/micropip/package.py
--- a/packages/micropip/src/micropip/package.py
+++ b/packages/micropip/src/micropip/package.py
@@ -1,3 +1,4 @@
+import re
from collections import UserDict
from dataclasses import astuple, dataclass
from typing import Iterable
@@ -31,6 +32,10 @@ def format_row(values, widths, filler=""):
return "\n".join(rows)
+def normalize_package_name(pkgname: str):
+ return re.sub(r"[^\w\d.]+", "_", pkgname, re.UNICODE)
+
+
@dataclass
class PackageMetadata:
name: str
@@ -54,6 +59,14 @@ class PackageDict(UserDict):
def __repr__(self):
return self._tabularize()
+ def __getitem__(self, key):
+ normalized_key = normalize_package_name(key)
+ return super().__getitem__(normalized_key)
+
+ def __contains__(self, key):
+ normalized_key = normalize_package_name(key)
+ return super().__contains__(normalized_key)
+
def _tabularize(self):
headers = [key.capitalize() for key in PackageMetadata.keys()]
table = list(self.values())
| diff --git a/packages/micropip/test_micropip.py b/packages/micropip/test_micropip.py
--- a/packages/micropip/test_micropip.py
+++ b/packages/micropip/test_micropip.py
@@ -401,6 +401,28 @@ def test_list_wheel_package(monkeypatch):
assert "dummy" in pkg_list and pkg_list["dummy"].source.lower() == dummy_url
+def test_list_wheel_name_mismatch(monkeypatch):
+ pytest.importorskip("packaging")
+ from micropip import _micropip
+
+ dummy_pkg_name = "dummy-dummy"
+ dummy_url = (
+ f"https://dummy.com/{dummy_pkg_name.replace('-', '_')}-1.0.0-py3-none-any.whl"
+ )
+ _mock_fetch_bytes = mock_fetch_bytes(dummy_pkg_name, f"Name: {dummy_pkg_name}")
+
+ monkeypatch.setattr(_micropip, "fetch_bytes", _mock_fetch_bytes)
+
+ asyncio.get_event_loop().run_until_complete(_micropip.install(dummy_url))
+
+ pkg_list = _micropip._list()
+ assert (
+ dummy_pkg_name in pkg_list
+ and pkg_list[dummy_pkg_name].source.lower() == dummy_url
+ and pkg_list[dummy_pkg_name].name == dummy_pkg_name
+ )
+
+
def test_list_pyodide_package(selenium_standalone_micropip):
selenium = selenium_standalone_micropip
selenium.run_js(
| micropip does not resolve locally available versions
I have three wheels:
* travertino is v0.1.3
* toga-core is v0.3.0.dev33, and declares travertino>=0.1.3 as a requirement
* toga-web is v0.3.0.dev33, and declares toga-core==0.3.0.dev33 as a requirement
All three exist as `.whl` files in my static files folder. Travertino 0.1.3 has been published to PyPI; toga-core and toga-web dev32 have been published to PyPI, but dev33 are local development builds.
On my Pyodide page, I run:
```
import micropip
micropip.install([
'./static/travertino-0.1.3-py3-none-any.whl',
'./static/toga_core-0.3.0.dev33-py3-none-any.whl',
'./static/toga_web-0.3.0.dev33-py3-none-any.whl',
])
```
This raises the following exception:
```
Traceback (most recent call last):
File "/lib/python3.10/asyncio/futures.py", line 201, in result
raise self._exception
File "/lib/python3.10/asyncio/tasks.py", line 234, in __step
result = coro.throw(exc)
File "/lib/python3.10/site-packages/micropip/_micropip.py", line 183, in install
transaction = await self.gather_requirements(requirements, ctx, keep_going)
File "/lib/python3.10/site-packages/micropip/_micropip.py", line 173, in gather_requirements
await gather(*requirement_promises)
File "/lib/python3.10/asyncio/futures.py", line 284, in __await__
yield self # This tells Task to wait for completion.
File "/lib/python3.10/asyncio/tasks.py", line 304, in __wakeup
future.result()
File "/lib/python3.10/asyncio/futures.py", line 201, in result
raise self._exception
File "/lib/python3.10/asyncio/tasks.py", line 232, in __step
result = coro.send(None)
File "/lib/python3.10/site-packages/micropip/_micropip.py", line 245, in add_requirement
await self.add_wheel(name, wheel, version, (), ctx, transaction)
File "/lib/python3.10/site-packages/micropip/_micropip.py", line 316, in add_wheel
await self.add_requirement(recurs_req, ctx, transaction)
File "/lib/python3.10/site-packages/micropip/_micropip.py", line 286, in add_requirement
raise ValueError(
ValueError: Couldn't find a pure Python 3 wheel for 'toga-core==0.3.0.dev33'. You can use `micropip.install(..., keep_going=True)` to get a list of all packages with missing wheels.
```
If I relax the `install_requires` on toga-web to remove the declared dependency on toga-core, the import succeeds, and the code runs successfully.
I've tried splitting the install into two passes:
```
await micropip.install([
'./static/travertino-0.1.3-py3-none-any.whl',
'./static/toga_core-0.3.0.dev33-py3-none-any.whl',
])
await micropip.install([
'./static/toga_web-0.3.0.dev33-py3-none-any.whl',
])
```
thinking that if toga-core's installation was completed, toga-web would be able to find it; that makes no difference.
I've also tried producing the packages without the .dev33 version suffix, thinking that it might be "prerelease" versions that are the issue, but that made no difference either.
However, if I revert the version number on toga-core and toga-web to 0.3.0.dev32 (the last versions published to PyPI), the import *succeeds* - but the old (published to PyPI) version of the toga-core code is used.
Looking at the network requests that the page has made indicates that the wheels were all loaded - and then micropip requested the toga-core PyPI metadata, and toga_core-0.3.0.dev32 (the most recent published version) was downloaded from PyPI. Interestingly, *no* request was made to load Travertino 0.1.3 from PyPI as a result of the dependency declared in toga-core.
| Thanks for the report. Yes, I can confirm that this is a bug, and splitting the install should work.
The reason for this bug is that micropip handles the package name differently when it is installed from PyPI than when it is installed from a `.whl` URL.
```python
>>> import micropip
>>> await micropip.install("toga-web==0.3.0.dev32")
>>> micropip.list()
Name | Version | Source
---------- | ----------- | -------
travertino | 0.1.3 | pypi
toga-core | 0.3.0.dev32 | pypi
toga-web | 0.3.0.dev32 | pypi
...
```
```python
>>> import micropip
>>> await micropip.install("https://files.pythonhosted.org/packages/56/93/7bf199546e0de51404d7c714c618f9e67d215c2723351e1af0392e5e0a59/toga_web-0.3.0.dev32-py3-none-any.whl")
>>> micropip.list()
Name | Version | Source
---------- | ----------- | ------------------------------------------------------------------------------------------------------------------------------------------------
travertino | 0.1.3 | pypi
toga-core | 0.3.0.dev32 | pypi
toga_web | 0.3.0.dev32 | https://files.pythonhosted.org/packages/56/93/7bf199546e0de51404d7c714c618f9e67d215c2723351e1af0392e5e0a59/toga_web-0.3.0.dev32-py3-none-any.whl
...
```
When installed from PyPI, micropip thinks its name is `toga-web`, while when installed from a wheel URL, micropip parses the name from the URL: `toga_web`. | 2022-04-21T00:44:15 |
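For illustration, a minimal sketch of the idea behind the fix: normalize names on both sides before comparing. This uses `packaging.utils.canonicalize_name` (PEP 503) as a stand-in; the merged patch above uses its own regex-based `normalize_package_name` instead.
```py
from packaging.utils import canonicalize_name

# Both spellings canonicalize to "toga-web", so a lookup keyed on
# canonical names finds the locally installed wheel.
locked = {canonicalize_name("toga_web"): "0.3.0.dev33"}

def is_locked(requirement_name: str) -> bool:
    return canonicalize_name(requirement_name) in locked

assert is_locked("toga-web")
assert is_locked("toga_web")
```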
pyodide/pyodide | 2,433 | pyodide__pyodide-2433 | [
"2412"
] | ecaab15c12383fc318531d8942c0536f9248c014 | diff --git a/packages/micropip/src/micropip/_micropip.py b/packages/micropip/src/micropip/_micropip.py
--- a/packages/micropip/src/micropip/_micropip.py
+++ b/packages/micropip/src/micropip/_micropip.py
@@ -151,6 +151,7 @@ async def gather_requirements(
requirements: str | list[str],
ctx=None,
keep_going: bool = False,
+ deps: bool = True,
**kwargs,
):
ctx = ctx or default_environment()
@@ -164,6 +165,7 @@ async def gather_requirements(
"locked": copy.deepcopy(self.installed_packages),
"failed": [],
"keep_going": keep_going,
+ "deps": deps,
}
requirement_promises = []
for requirement in requirements:
@@ -179,6 +181,7 @@ async def install(
requirements: str | list[str],
ctx=None,
keep_going: bool = False,
+ deps: bool = True,
credentials: str | None = None,
):
async def _install(install_func, done_callback):
@@ -190,7 +193,7 @@ async def _install(install_func, done_callback):
if credentials:
fetch_extra_kwargs["credentials"] = credentials
transaction = await self.gather_requirements(
- requirements, ctx, keep_going, **fetch_extra_kwargs
+ requirements, ctx, keep_going, deps, **fetch_extra_kwargs
)
if transaction["failed"]:
@@ -355,8 +358,9 @@ async def add_wheel(
if project_name == "UNKNOWN":
project_name = name
- for recurs_req in dist.requires(extras):
- await self.add_requirement(recurs_req, ctx, transaction)
+ if transaction["deps"]:
+ for recurs_req in dist.requires(extras):
+ await self.add_requirement(recurs_req, ctx, transaction)
transaction["wheels"].append((project_name, wheel, version))
@@ -401,6 +405,7 @@ def find_wheel(
def install(
requirements: str | list[str],
keep_going: bool = False,
+ deps: bool = True,
credentials: str | None = None,
):
"""Install the given package and all of its dependencies.
@@ -441,6 +446,11 @@ def install(
- If ``True``, the micropip will keep going after the first error, and report a list
of errors at the end.
+ deps : ``bool``, default: True
+
+ If ``True``, install dependencies specified in METADATA file for
+ each package. Otherwise do not install dependencies.
+
credentials : ``Optional[str]``
This parameter specifies the value of ``credentials`` when calling the
@@ -460,7 +470,7 @@ def install(
importlib.invalidate_caches()
return asyncio.ensure_future(
PACKAGE_MANAGER.install(
- requirements, keep_going=keep_going, credentials=credentials
+ requirements, keep_going=keep_going, deps=deps, credentials=credentials
)
)
| diff --git a/packages/micropip/test_micropip.py b/packages/micropip/test_micropip.py
--- a/packages/micropip/test_micropip.py
+++ b/packages/micropip/test_micropip.py
@@ -189,13 +189,15 @@ def test_add_requirement():
base_url = f"http://{server_hostname}:{server_port}/"
url = base_url + "snowballstemmer-2.0.0-py2.py3-none-any.whl"
- transaction: dict[str, Any] = {
- "wheels": [],
- "locked": {},
- }
- asyncio.get_event_loop().run_until_complete(
- _micropip.PACKAGE_MANAGER.add_requirement(url, {}, transaction)
- )
+ transaction: dict[str, Any] = {
+ "wheels": [],
+ "locked": {},
+ "keep_going": True,
+ "deps": True,
+ }
+ asyncio.get_event_loop().run_until_complete(
+ _micropip.PACKAGE_MANAGER.add_requirement(url, {}, transaction)
+ )
[name, req, version] = transaction["wheels"][0]
assert name == "snowballstemmer"
@@ -348,6 +350,33 @@ def test_install_keep_going(monkeypatch):
)
+def test_install_no_deps(monkeypatch):
+ pytest.importorskip("packaging")
+ from micropip import _micropip
+
+ dummy_pkg_name = "dummy"
+ dep_pkg_name = "dependency_dummy"
+ _mock_get_pypi_json = mock_get_pypi_json(
+ {
+ dummy_pkg_name: f"{dummy_pkg_name}-1.0.0-py3-none-any.whl",
+ dep_pkg_name: f"{dep_pkg_name}-1.0.0-py3-none-any.whl",
+ }
+ )
+ _mock_fetch_bytes = mock_fetch_bytes(
+ dummy_pkg_name, f"Requires-Dist: {dep_pkg_name}\n\nUNKNOWN"
+ )
+
+ monkeypatch.setattr(_micropip, "_get_pypi_json", _mock_get_pypi_json)
+ monkeypatch.setattr(_micropip, "fetch_bytes", _mock_fetch_bytes)
+
+ asyncio.get_event_loop().run_until_complete(
+ _micropip.install(dummy_pkg_name, deps=False)
+ )
+
+ assert dummy_pkg_name in _micropip._list()
+ assert dep_pkg_name not in _micropip._list()
+
+
def test_fetch_wheel_fail(monkeypatch):
pytest.importorskip("packaging")
from micropip import _micropip
| Add micropip install no_deps option
It would be useful to add the option `no_deps=False` to `micropip.install` to disable dependency resolution. This can be useful in cases where
- the dependency resolution doesn't work as expected (in particular with custom wheels https://github.com/pyodide/pyodide/issues/2408)
- generally to be able to force-install a set of packages, for instance ignoring requirements that should have been optional in an arbitrary PyPI package.
The `no_deps` name is by analogy with `pip install --no-deps`, though maybe `deps=True` would also work, although it is less explicit.
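A hedged sketch of the intended usage (the merged patch above settles on `deps` rather than `no_deps`; the wheel path is just the example from the linked issue, and the top-level `await` assumes the Pyodide console):
```py
import micropip

# Install only this wheel; skip resolving its Requires-Dist entries.
await micropip.install(
    "./static/toga_web-0.3.0.dev33-py3-none-any.whl",
    deps=False,
)
```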
| We might also want to expose this without `micropip`. Like why not just `pyodide.loadPackage("some_wheel.whl")`? From a loading perspective it is a bit sad to do:
1. load Pyodide + distutils
2. Once all of that is finished, load micropip
3. Once that is finished, load the wheel
Without `micropip` we don't have `packaging` which we'd want to use to analyze the wheel metadata. Maybe we could record any wheels loaded and have `micropip` analyze the already loaded wheels when it is first imported, since that metadata will be lying around in the file system even if we don't know how to read it easily without `packaging`.
Yes, particularly now that all packages are wheels we might want to merge together micropip.install and pyodide.loadPackage somehow. Though I like that micropip is a somewhat independent Python package and not included inside core pyodide.
Though it's an orthogonal issue, for now, we could add this option to both.
Yeah I still don't think we should merge `micropip` with `loadPackage`. I think I've said this a bunch but we should add `micropip.freeze()` that creates a new `packages.json` with the currently loaded packages, and then if you use that `packages.json` on a subsequent load then you won't need micropip at all. This way people who only need a fixed set of dependencies from PyPI can resolve those dependencies once and in their deployed app they don't need to wait for micropip's slower dynamic resolution strategy. As you say, this is related but different than the current topic.
Also worth noting that it is possible to do a no-deps install for a pure Python wheel as follows:
```js
const wheel_promise = fetch("blah.whl").then(x => x.arrayBuffer());
const pyodide = await loadPyodide();
const buffer = await wheel_promise;
pyodide.unpackArchive(buffer, "whl");
pyodide.runPython(`
import blah
# ...
`);
```
This is the best possible option in the sense that the wheel can be downloaded concurrently with the rest of Pyodide.
But currently `micropip` won't know that `blah` is installed. One option would be to make micropip actually look at the `.dist-info` directories like real pip. Then it would know about wheels installed by whatever means.
> One option would be to make micropip actually look at the .dist-info directories like real pip. Then it would know about wheels installed by whatever means.
Agreed. Looking at those instead of having our own system to keep track of state seems like a good idea.
> > One option would be to make micropip actually look at the .dist-info directories like real pip. Then it would know about wheels installed by whatever means.
>
> Agreed. Looking at those instead of having our own system to keep track of state seems like a good idea.
+1 to it. | 2022-05-02T01:42:51 |
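A rough sketch of the `.dist-info` idea discussed above, assuming the installed set is derived from `importlib.metadata` (which reads the dist-info directories that any wheel install leaves behind) instead of micropip's own bookkeeping:
```py
from importlib.metadata import distributions

def installed_packages() -> dict[str, str]:
    # Maps distribution name -> version for every dist-info on sys.path,
    # regardless of whether micropip, loadPackage, or unpackArchive
    # installed it.
    return {dist.metadata["Name"]: dist.version for dist in distributions()}
```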
pyodide/pyodide | 2,507 | pyodide__pyodide-2507 | [
"2503"
] | ca60ebc51e4b355a8149625e8d86070ce21a1fd8 | diff --git a/pyodide-build/pyodide_build/pypabuild.py b/pyodide-build/pyodide_build/pypabuild.py
--- a/pyodide-build/pyodide_build/pypabuild.py
+++ b/pyodide-build/pyodide_build/pypabuild.py
@@ -19,7 +19,7 @@
from .common import get_hostsitepackages, get_pyversion
-UNISOLATED_PACKAGES = ["numpy", "scipy", "cffi", "pycparser", "pythran", "cython"]
+UNISOLATED_PACKAGES = ["numpy", "scipy", "cffi", "pycparser", "pythran"]
def symlink_unisolated_packages(env: IsolatedEnv):
@@ -39,7 +39,7 @@ def remove_unisolated_requirements(requires: set[str]) -> set[str]:
for reqstr in list(requires):
req = Requirement(reqstr)
for avoid_name in UNISOLATED_PACKAGES:
- if avoid_name in req.name:
+ if avoid_name in req.name.lower():
requires.remove(reqstr)
return requires
@@ -58,6 +58,11 @@ def replace_env(build_env: Mapping[str, str]):
def install_reqs(env: IsolatedEnv, reqs: set[str]):
env.install(remove_unisolated_requirements(reqs))
+ # Some packages (numcodecs) don't declare cython as a build dependency and
+ # only recythonize if it is present. We need them to always recythonize so
+ # we always install cython. If the reqs included some cython version already
+ # then this won't do anything.
+ env.install(["cython"])
def _build_in_isolated_env(
@@ -66,6 +71,10 @@ def _build_in_isolated_env(
outdir: str,
distribution: str,
) -> str:
+ # For debugging: The following line disables removal of the isolated venv.
+ # It will be left in the /tmp folder and can be inspected or entered as
+ # needed.
+ # _IsolatedEnvBuilder.__exit__ = lambda *args: None
with _IsolatedEnvBuilder() as env:
builder.python_executable = env.executable
builder.scripts_dir = env.scripts_dir
| pyodide_build buildpkg does not install Cython as a build dependency when it is spelled with a lower case c
## π Bug
When trying to build [cftime](https://github.com/Unidata/cftime) the isolated env does not install cython.
### To Reproduce
`python -m pyodide_build buildpkg packages/cftime/meta.yaml` on [this meta.yaml](https://gist.github.com/ocefpaf/8b9a90bfa40d7dc27c63e3bf22ef335a)
### Expected behavior
Successful build :smile:
### Environment
- Pyodide Version<!-- (e.g. 1.8.1) -->:
- Browser version<!-- (e.g. Chrome 95.0.4638.54) -->:
- Any other relevant information:
### Additional context
A patch to rename `cython` to `Cython` in the cftime pyproject.toml fixed it but we should not be case sensitive with PyPI names.
xref.: https://github.com/pyodide/pyodide/pull/2504
| Thanks for the report!
I think this line needs to change:
https://github.com/pyodide/pyodide/blob/d74bc74c9d76ce1d1579fd0af0a77845428cae1b/pyodide-build/pyodide_build/pypabuild.py#L42
to
```py
if avoid_name in req.name.lower():
```
Should I send a PR or do you prefer to send it?
I'm not completely sure if this is a correct fix, could you try it and see? I will also test it out. I'm actually a bit confused. It would be great if you would make the PR, but only if this is the right fix.
Working on it...
I don't think this fixes it...
So it does work to remove `cython` from `UNISOLATED_PACKAGES`. I can't remember why I needed to put it in there right now, so we could try removing it. I suspect the build will fail somewhere but maybe we can find a different fix for that. The comments on #2272 may give some clues.
> Cython I unisolated specifically because numcodecs needs it but it isn't in the numcodecs build dependencies.
Okay so this is a hack to fix numcodecs. In that case we should really patch numcodecs.
I think the problem is that some packages don't declare that they need cython and their plan is not to recythonize if it isn't present. But we need them to recythonize because we need different C files for wasm. So we want to include some version of it always.
I tested and removing cython from `UNISOLATED_PACKAGES` does work. I understand the conundrum here b/c some projects don't even declare cython as a build requirement. However, as a packager, I believe that is bad practice. We should always re-cythonize. The main reason is probably b/c these packages predate PEP 518, which made it easier to declare a build environment.
TL;DR: what is your opinion? Patch packages like `numcodecs`, or those that use the lower case `cython`? It is a hard choice that I don't want to make :smile:
I have a solution to both, PR incoming. | 2022-05-05T14:10:40 |
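For reference, a small illustrative snippet of why the comparison needs to be case-insensitive: `packaging` preserves the spelling used in `pyproject.toml`, so `Cython` and `cython` differ unless both sides are lowercased.
```py
from packaging.requirements import Requirement

req = Requirement("Cython>=0.29")
assert req.name == "Cython"           # spelling from pyproject.toml is kept
assert req.name.lower() == "cython"   # normalized comparison key
```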
|
pyodide/pyodide | 2,532 | pyodide__pyodide-2532 | [
"2487"
] | 6b01b20a12f347a6798d8cbcf9bc85f4755c8be8 | diff --git a/packages/micropip/src/micropip/_micropip.py b/packages/micropip/src/micropip/_micropip.py
--- a/packages/micropip/src/micropip/_micropip.py
+++ b/packages/micropip/src/micropip/_micropip.py
@@ -275,13 +275,13 @@ async def add_requirement(
return
else:
req = Requirement(requirement)
+
req.name = req.name.lower()
# If there's a Pyodide package that matches the version constraint, use
# the Pyodide package instead of the one on PyPI
- if (
- req.name in BUILTIN_PACKAGES
- and BUILTIN_PACKAGES[req.name]["version"] in req.specifier
+ if req.name in BUILTIN_PACKAGES and req.specifier.contains(
+ BUILTIN_PACKAGES[req.name]["version"], prereleases=True
):
version = BUILTIN_PACKAGES[req.name]["version"]
transaction["pyodide_packages"].append(
@@ -298,7 +298,7 @@ async def add_requirement(
# Is some version of this package is already installed?
if req.name in transaction["locked"]:
ver = transaction["locked"][req.name].version
- if ver in req.specifier:
+ if req.specifier.contains(ver, prereleases=True):
# installed version matches, nothing to do
return
else:
| diff --git a/packages/micropip/test_micropip.py b/packages/micropip/test_micropip.py
--- a/packages/micropip/test_micropip.py
+++ b/packages/micropip/test_micropip.py
@@ -16,7 +16,7 @@ def mock_get_pypi_json(pkg_map):
Parameters
----------
- pkg_map : ``None | Dict[str, str]``
+ pkg_map : ``None | Dict[str, Any]``
Dictionary that maps package name to dummy release file.
Packages that are not in this dictionary will return
@@ -34,13 +34,14 @@ def __eq__(self, other):
async def _mock_get_pypi_json(pkgname, **kwargs):
if pkgname in pkg_map:
- pkg_file = pkg_map[pkgname]
+ pkg_file = pkg_map[pkgname]["name"]
+ pkg_version = pkg_map[pkgname].get("version", "1.0.0")
else:
pkg_file = f"{pkgname}-1.0.0.tar.gz"
-
+ pkg_version = "1.0.0"
return {
"releases": {
- "1.0.0": [
+ pkg_version: [
{
"filename": pkg_file,
"url": "",
@@ -333,7 +334,7 @@ def test_install_keep_going(monkeypatch):
dummy_pkg_name = "dummy"
_mock_get_pypi_json = mock_get_pypi_json(
- {dummy_pkg_name: f"{dummy_pkg_name}-1.0.0-py3-none-any.whl"}
+ {dummy_pkg_name: {"name": f"{dummy_pkg_name}-1.0.0-py3-none-any.whl"}}
)
_mock_fetch_bytes = mock_fetch_bytes(
dummy_pkg_name, "Requires-Dist: dep1\nRequires-Dist: dep2\n\nUNKNOWN"
@@ -350,6 +351,51 @@ def test_install_keep_going(monkeypatch):
)
+def test_install_version_compare_prerelease(monkeypatch):
+ pytest.importorskip("packaging")
+ from micropip import _micropip
+
+ dummy_pkg_name = "dummy"
+ version_new = "3.2.1a1"
+ version_old = "3.2.0"
+
+ _mock_get_pypi_json_new = mock_get_pypi_json(
+ {
+ dummy_pkg_name: {
+ "name": f"{dummy_pkg_name}-{version_new}-py3-none-any.whl",
+ "version": version_new,
+ }
+ }
+ )
+
+ _mock_get_pypi_json_old = mock_get_pypi_json(
+ {
+ dummy_pkg_name: {
+ "name": f"{dummy_pkg_name}-{version_old}-py3-none-any.whl",
+ "version": version_old,
+ }
+ }
+ )
+
+ _mock_fetch_bytes = mock_fetch_bytes(dummy_pkg_name, "UNKNOWN")
+
+ monkeypatch.setattr(_micropip, "fetch_bytes", _mock_fetch_bytes)
+
+ monkeypatch.setattr(_micropip, "_get_pypi_json", _mock_get_pypi_json_new)
+ asyncio.get_event_loop().run_until_complete(
+ _micropip.install(f"{dummy_pkg_name}=={version_new}")
+ )
+
+ monkeypatch.setattr(_micropip, "_get_pypi_json", _mock_get_pypi_json_old)
+ asyncio.get_event_loop().run_until_complete(
+ _micropip.install(f"{dummy_pkg_name}>={version_old}")
+ )
+
+ installed_pkgs = _micropip._list()
+ # Older version should not be installed
+ assert installed_pkgs[dummy_pkg_name].version == version_new
+
+
def test_install_no_deps(monkeypatch):
pytest.importorskip("packaging")
from micropip import _micropip
@@ -399,7 +445,7 @@ def test_list_pypi_package(monkeypatch):
dummy_pkg_name = "dummy"
_mock_get_pypi_json = mock_get_pypi_json(
- {dummy_pkg_name: f"{dummy_pkg_name}-1.0.0-py3-none-any.whl"}
+ {dummy_pkg_name: {"name": f"{dummy_pkg_name}-1.0.0-py3-none-any.whl"}}
)
_mock_fetch_bytes = mock_fetch_bytes(dummy_pkg_name, "UNKNOWN")
| micropip didn't resolve package versions correctly
## π Bug
Hi, I am using micropip to install two pure Python packages, but got an error message complaining:
```
ValueError: Requested 'imjoy-rpc>=0.5.7', but imjoy-rpc==0.5.10a2 is already installed
```
<!-- A clear and concise description of what the bug is. -->
### To Reproduce
<!-- Minimal code example to reproduce the bug. -->
```
Welcome to the Pyodide terminal emulator π
Python 3.10.2 (main, May 4 2022 10:50:05) on WebAssembly VM
Type "help", "copyright", "credits" or "license" for more information.
>>> import micropip
>>> await micropip.install("imjoy-rpc==0.5.10a2")
>>> await micropip.install("imjoy-rpc>=0.5.7")
Traceback (most recent call last):
File "<console>", line 1, in <module>
File "/lib/python3.10/asyncio/futures.py", line 284, in __await__
yield self # This tells Task to wait for completion.
File "/lib/python3.10/asyncio/tasks.py", line 304, in __wakeup
future.result()
File "/lib/python3.10/asyncio/futures.py", line 201, in result
raise self._exception
File "/lib/python3.10/asyncio/tasks.py", line 234, in __step
result = coro.throw(exc)
File "/lib/python3.10/site-packages/micropip/_micropip.py", line 192, in install
transaction = await self.gather_requirements(
File "/lib/python3.10/site-packages/micropip/_micropip.py", line 174, in gather_requirements
await gather(*requirement_promises)
File "/lib/python3.10/asyncio/futures.py", line 284, in __await__
yield self # This tells Task to wait for completion.
File "/lib/python3.10/asyncio/tasks.py", line 304, in __wakeup
future.result()
File "/lib/python3.10/asyncio/futures.py", line 201, in result
raise self._exception
File "/lib/python3.10/asyncio/tasks.py", line 232, in __step
result = coro.send(None)
File "/lib/python3.10/site-packages/micropip/_micropip.py", line 302, in add_requirement
raise ValueError(
ValueError: Requested 'imjoy-rpc>=0.5.7', but imjoy-rpc==0.5.10a2 is already installed
```
### Expected behavior
When I run the second install command, it should know that the version is already satisfied and do nothing instead of generating an error.
### Environment
Latest version at https://pyodide-cdn2.iodide.io/dev/full/console.html
| Thanks for the report! Probably it did something like `'0.5.10' >= '0.5.7'` and got `False`.
> Thanks for the report! Probably it did something like `'0.5.10' >= '0.5.7'` and got `False`.
FYI: it also fails if we install the later one without specifying the version; it appears that it only matches the exact string.
https://github.com/pypa/packaging/blob/ba124324e866e518ebfe0378a25b6ba4816e5880/packaging/specifiers.py#L185-L189
Setting `req.prereleases = True` will work I guess. | 2022-05-09T05:22:58 |
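An illustration of the behavior in question, using `packaging` directly: a `SpecifierSet` excludes pre-releases by default, so an installed `0.5.10a2` fails a plain `>=0.5.7` check unless pre-releases are allowed explicitly (which is what the merged patch above does via `specifier.contains(..., prereleases=True)`).
```py
from packaging.specifiers import SpecifierSet

spec = SpecifierSet(">=0.5.7")
assert "0.5.10a2" not in spec                       # default: pre-releases excluded
assert spec.contains("0.5.10a2", prereleases=True)  # the fix: explicitly included
```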
pyodide/pyodide | 2,551 | pyodide__pyodide-2551 | [
"2436"
] | c6f6166ef2245ac4f670ad01d8e26b98891dca2c | diff --git a/pyodide-build/pyodide_build/buildpkg.py b/pyodide-build/pyodide_build/buildpkg.py
--- a/pyodide-build/pyodide_build/buildpkg.py
+++ b/pyodide-build/pyodide_build/buildpkg.py
@@ -145,12 +145,18 @@ def get_bash_runner():
"SIDE_MODULE_CFLAGS",
"SIDE_MODULE_LDFLAGS",
"STDLIB_MODULE_CFLAGS",
- "OPEN_SSL_ROOT",
"UNISOLATED_PACKAGES",
+ "WASM_LIBRARY_DIR",
+ "WASM_PKG_CONFIG_PATH",
]
} | {"PYODIDE": "1"}
if "PYODIDE_JOBS" in os.environ:
env["PYODIDE_JOBS"] = os.environ["PYODIDE_JOBS"]
+
+ env["PKG_CONFIG_PATH"] = env["WASM_PKG_CONFIG_PATH"]
+ if "PKG_CONFIG_PATH" in os.environ:
+ env["PKG_CONFIG_PATH"] += f":{os.environ['PKG_CONFIG_PATH']}"
+
with BashRunnerWithSharedEnvironment(env=env) as b:
b.run(
f"source {PYODIDE_ROOT}/emsdk/emsdk/emsdk_env.sh", stderr=subprocess.DEVNULL
| diff --git a/packages/lxml/test_lxml.py b/packages/lxml/test_lxml.py
new file mode 100644
--- /dev/null
+++ b/packages/lxml/test_lxml.py
@@ -0,0 +1,18 @@
+from pyodide_test_runner import run_in_pyodide
+
+
+@run_in_pyodide(packages=["lxml"])
+def test_lxml():
+ from lxml import etree
+
+ root = etree.XML(
+ """<root>
+ <TEXT1 class="myitem">one</TEXT1>
+ <TEXT2 class="myitem">two</TEXT2>
+ <TEXT3 class="myitem">three</TEXT3>
+ <v-TEXT4 class="v-list">four</v-TEXT4>
+ </root>"""
+ )
+
+ items = root.xpath("//*[@class='myitem']")
+ assert ["one", "two", "three"] == [item.text for item in items]
diff --git a/packages/pyyaml/test_pyyaml.py b/packages/pyyaml/test_pyyaml.py
new file mode 100644
--- /dev/null
+++ b/packages/pyyaml/test_pyyaml.py
@@ -0,0 +1,16 @@
+from pyodide_test_runner import run_in_pyodide
+
+
+@run_in_pyodide(packages=["pyyaml"])
+def test_pyyaml():
+ import yaml
+ from yaml import CLoader as Loader
+
+ document = """
+ - Hesperiidae
+ - Papilionidae
+ - Apatelodidae
+ - Epiplemidae
+ """
+ loaded = yaml.load(document, Loader=Loader)
+ assert loaded == ["Hesperiidae", "Papilionidae", "Apatelodidae", "Epiplemidae"]
| BLD Provide a way to easily specify library paths in `meta.yaml`
## Problem
When building Python packages that depend on C/C++ libraries,
we often hard-code paths of library headers (.h) / static binaries (.a).
This is annoying because when we update a library, all hard-coded paths need to be updated.
For example, in `lxml` package, there are a bunch of hard-coded paths.
https://github.com/pyodide/pyodide/blob/3f9ae2ed1e1aee688251af86a4dc23d3d867740c/packages/lxml/meta.yaml#L8-L22
## Possible solutions
1. Put every library headers / binaries into a specific build directory.
e.g. put headers to `e.g. ${PYODIDE_ROOT}/build/pyodide/include` and static binaries to `${PYODIDE_ROOT}/build/pyodide/lib`.
Then inside `meta.yaml`, those can be easily specified such as
```
-I$(PYODIDE_INCLUDE_PATH)
-L$(PYODIDE_LIBRARY_PATH)
-lxml
```
2. Add a variable for each library.
For example,
```
-I$(LIBRARY_libxml)/include
```
instead of,
```
-I$(PYODIDE_ROOT)/packages/libxml/build/libxml-2.9.10/include
```
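A hedged sketch of how a build recipe could resolve flags via `pkg-config` instead of hard-coded paths, assuming `PKG_CONFIG_PATH` points at the wasm sysroot (the merged patch above wires this up through `WASM_PKG_CONFIG_PATH`); the module name `libxml-2.0` is the usual pkg-config name for libxml2:
```py
import subprocess

def cflags(library: str) -> list[str]:
    # e.g. cflags("libxml-2.0") -> ["-I/.../include/libxml2", ...]
    out = subprocess.run(
        ["pkg-config", "--cflags", library],
        check=True,
        capture_output=True,
        text=True,
    ).stdout
    return out.split()
```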
| 2022-05-12T06:44:55 |
|
pyodide/pyodide | 2,754 | pyodide__pyodide-2754 | [
"2752"
] | d80f702a7e47a83ce685851f87e61c170fd4d688 | diff --git a/packages/micropip/src/micropip/_micropip.py b/packages/micropip/src/micropip/_micropip.py
--- a/packages/micropip/src/micropip/_micropip.py
+++ b/packages/micropip/src/micropip/_micropip.py
@@ -2,6 +2,7 @@
import hashlib
import importlib
import json
+import warnings
from asyncio import gather
from dataclasses import dataclass, field
from importlib.metadata import PackageNotFoundError
@@ -202,6 +203,13 @@ def find_wheel(metadata: dict[str, Any], req: Requirement) -> WheelInfo:
reverse=True,
)
for ver in candidate_versions:
+ if str(ver) not in releases:
+ pkg_name = metadata.get("info", {}).get("name", "UNKNOWN")
+ warnings.warn(
+ f"The package '{pkg_name}' contains an invalid version: '{ver}'. This version will be skipped"
+ )
+ continue
+
release = releases[str(ver)]
for fileinfo in release:
url = fileinfo["url"]
| diff --git a/packages/micropip/test_micropip.py b/packages/micropip/test_micropip.py
--- a/packages/micropip/test_micropip.py
+++ b/packages/micropip/test_micropip.py
@@ -543,6 +543,27 @@ async def test_install_pre(
assert micropip.list()[dummy].version == version_should_select
[email protected]
[email protected]("ignore::Warning")
[email protected]("version_invalid", ["1.2.3-1", "2.3.1-post1", "3.2.1-pre1"])
+async def test_install_version_invalid_pep440(
+ mock_fetch: mock_fetch_cls,
+ version_invalid: str,
+) -> None:
+ # Micropip should skip package versions which do not follow PEP 440.
+ #
+ # [N!]N(.N)*[{a|b|rc}N][.postN][.devN]
+ #
+
+ dummy = "dummy"
+ version_stable = "1.0.0"
+
+ mock_fetch.add_pkg_version(dummy, version_stable)
+ mock_fetch.add_pkg_version(dummy, version_invalid)
+ await micropip.install(dummy)
+ assert micropip.list()[dummy].version == version_stable
+
+
@pytest.mark.asyncio
async def test_fetch_wheel_fail(monkeypatch, wheel_base):
pytest.importorskip("packaging")
| Micropip doesn't handle post releases correctly
`await micropip.install("xo-gd==3.2.2.post4")` gives
```
File "/lib/python3.10/site-packages/micropip/_micropip.py", line 345, in find_wheel
release = releases[str(ver)]
KeyError: '3.2.2.post4'
```
it looks like the data is actually stored under the key `3.2.2-4`.
| Does `xo-gd==3.2.2.post4` actually exist? I think I can't install it locally with `pip`.
I was able to install some other packages with `.post` release.
```py
import micropip
await micropip.install("pytorch-lightning-bolts==0.3.2.post1", deps=False)
```
Huh. `await micropip.install("xo-gd")` tries to install 3.2.2.post4
Okay, it seems like `packaging.version.Version` converts `3.2.2-4` to `3.2.2.post4`...
```py
>>> from packaging.version import Version
>>> Version("3.2.2-4")
<Version('3.2.2.post4')>
```
Seems like maybe the problem here is that you shouldn't name your versions like that, so we should close this as wont-fix.
I think I can fix that, I'll open a PR.
Is this a valid version number though?
No it is not. It is not possible to install `xo-gd==3.2.2-4` with `pip`. It seems like it has invalid metadata...
```sh
$ pip install xo-gd==3.2.2-4
ERROR: Could not find a version that satisfies the requirement xo-gd==3.2.2-4 (from versions: 3.1.4.1, 3.1.4.2, 3.1.4.3, 3.1.4.11, 3.1.4.12, 3.1.4.35, 3.1.5, 3.1.5.1, 3.1.5.3, 3.1.5.4, 3.1.5.5, 3.1.5.6, 3.2.1, 3.2.1.1, 3.2.2.post5, 3.2.3, 3.2.3.2, 3.2.4, 3.2.5, 3.2.6, 3.2.7, 3.2.8)
ERROR: No matching distribution found for xo-gd==3.2.2-4
```
```sh
$ pip install xo-gd==3.2.2.post5
Collecting xo-gd==3.2.2.post5
Using cached xo-gd-3.2.2-5.tar.gz (31 kB)
Preparing metadata (setup.py) ... done
WARNING: Requested xo-gd==3.2.2.post5 from https://files.pythonhosted.org/packages/17/66/b21ac5a35d14328f8369fc60fcb43a5584282b618cf3627251649769b1b5/xo-gd-3.2.2-5.tar.gz#sha256=21d5feee32f6f510ff6fbc3f02f0d02d1cadaab1b46bf14c5ee5884977273a0c, but installing version 3.2.2-4
Discarding https://files.pythonhosted.org/packages/17/66/b21ac5a35d14328f8369fc60fcb43a5584282b618cf3627251649769b1b5/xo-gd-3.2.2-5.tar.gz#sha256=21d5feee32f6f510ff6fbc3f02f0d02d1cadaab1b46bf14c5ee5884977273a0c (from https://pypi.org/simple/xo-gd/): Requested xo-gd==3.2.2.post5 from https://files.pythonhosted.org/packages/17/66/b21ac5a35d14328f8369fc60fcb43a5584282b618cf3627251649769b1b5/xo-gd-3.2.2-5.tar.gz#sha256=21d5feee32f6f510ff6fbc3f02f0d02d1cadaab1b46bf14c5ee5884977273a0c has inconsistent version: filename has '3.2.2.post5', but metadata has '3.2.2.post4'
ERROR: Could not find a version that satisfies the requirement xo-gd==3.2.2.post5 (from versions: 3.1.4.1, 3.1.4.2, 3.1.4.3, 3.1.4.11, 3.1.4.12, 3.1.4.35, 3.1.5, 3.1.5.1, 3.1.5.3, 3.1.5.4, 3.1.5.5, 3.1.5.6, 3.2.1, 3.2.1.1, 3.2.2.post5, 3.2.3, 3.2.3.2, 3.2.4, 3.2.5, 3.2.6, 3.2.7, 3.2.8)
ERROR: No matching distribution found for xo-gd==3.2.2.post5
```
So, I think we should discard that version if it has an invalid version string and look for other versions, as people can make a mistake anytime. | 2022-06-21T04:40:32 |
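A minimal sketch of that approach, matching the merged patch above: when a normalized `Version` (e.g. `3.2.2-4`, which renders as `3.2.2.post4`) has no matching key in the PyPI `releases` mapping, warn and skip it instead of raising `KeyError`.
```py
import warnings

def pick_release(releases: dict, candidate_versions: list):
    for ver in candidate_versions:
        if str(ver) not in releases:
            warnings.warn(f"skipping invalid version: {ver!r}")
            continue
        return releases[str(ver)]
    return None
```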
pyodide/pyodide | 2,767 | pyodide__pyodide-2767 | [
"2731"
] | 8bfce929862e3600961459ec7a787e2116fb2ca3 | diff --git a/packages/micropip/src/micropip/_compat_in_pyodide.py b/packages/micropip/src/micropip/_compat_in_pyodide.py
--- a/packages/micropip/src/micropip/_compat_in_pyodide.py
+++ b/packages/micropip/src/micropip/_compat_in_pyodide.py
@@ -1,3 +1,7 @@
+from io import BytesIO
+from typing import IO
+from urllib.parse import urlparse
+
from pyodide._core import IN_BROWSER
from pyodide.http import pyfetch
@@ -14,10 +18,15 @@
# Otherwise, this is pytest test collection so let it go.
-async def fetch_bytes(url: str, kwargs: dict[str, str]) -> bytes:
- if url.startswith("file://"):
- return (await loadBinaryFile("", url)).to_bytes()
- return await (await pyfetch(url, **kwargs)).bytes()
+async def fetch_bytes(url: str, kwargs: dict[str, str]) -> IO[bytes]:
+ parsed_url = urlparse(url)
+ if parsed_url.scheme == "emfs":
+ return open(parsed_url.path, "rb")
+ if parsed_url.scheme == "file":
+ result_bytes = (await loadBinaryFile("", parsed_url.path)).to_bytes()
+ else:
+ result_bytes = await (await pyfetch(url, **kwargs)).bytes()
+ return BytesIO(result_bytes)
async def fetch_string(url: str, kwargs: dict[str, str]) -> str:
diff --git a/packages/micropip/src/micropip/_compat_not_in_pyodide.py b/packages/micropip/src/micropip/_compat_not_in_pyodide.py
--- a/packages/micropip/src/micropip/_compat_not_in_pyodide.py
+++ b/packages/micropip/src/micropip/_compat_not_in_pyodide.py
@@ -1,4 +1,5 @@
-from typing import Any
+from io import BytesIO
+from typing import IO, Any
REPODATA_PACKAGES: dict[str, dict[str, Any]] = {}
@@ -12,12 +13,12 @@ def to_py():
from urllib.request import Request, urlopen
-async def fetch_bytes(url: str, kwargs: dict[str, str]) -> bytes:
- return urlopen(Request(url, headers=kwargs)).read()
+async def fetch_bytes(url: str, kwargs: dict[str, str]) -> IO[bytes]:
+ return BytesIO(urlopen(Request(url, headers=kwargs)).read())
async def fetch_string(url: str, kwargs: dict[str, str]) -> str:
- return (await fetch_bytes(url, kwargs)).decode()
+ return (await fetch_bytes(url, kwargs)).read().decode()
async def loadDynlib(dynlib: str, is_shared_lib: bool) -> None:
diff --git a/packages/micropip/src/micropip/_micropip.py b/packages/micropip/src/micropip/_micropip.py
--- a/packages/micropip/src/micropip/_micropip.py
+++ b/packages/micropip/src/micropip/_micropip.py
@@ -8,9 +8,9 @@
from importlib.metadata import PackageNotFoundError
from importlib.metadata import distributions as importlib_distributions
from importlib.metadata import version as importlib_version
-from io import BytesIO
from pathlib import Path
-from typing import Any
+from typing import IO, Any
+from urllib.parse import ParseResult, urlparse
from zipfile import ZipFile
from packaging.markers import default_environment
@@ -55,9 +55,10 @@ class WheelInfo:
build: tuple[int, str] | tuple[()]
tags: frozenset[Tag]
url: str
+ parsed_url: ParseResult
project_name: str | None = None
digests: dict[str, str] | None = None
- data: BytesIO | None = None
+ data: IO[bytes] | None = None
_dist: Any = None
dist_info: Path | None = None
_requires: list[Requirement] | None = None
@@ -68,7 +69,8 @@ def from_url(url: str) -> "WheelInfo":
See https://www.python.org/dev/peps/pep-0427/#file-name-convention
"""
- file_name = Path(url).name
+ parsed_url = urlparse(url)
+ file_name = Path(parsed_url.path).name
name, version, build, tags = parse_wheel_filename(file_name)
return WheelInfo(
name=name,
@@ -77,6 +79,7 @@ def from_url(url: str) -> "WheelInfo":
build=build,
tags=tags,
url=url,
+ parsed_url=parsed_url,
)
def is_compatible(self):
@@ -87,23 +90,27 @@ def is_compatible(self):
return True
return False
- async def download(self, fetch_kwargs):
+ async def _fetch_bytes(self, fetch_kwargs):
try:
- wheel_bytes = await fetch_bytes(self.url, fetch_kwargs)
+ return await fetch_bytes(self.url, fetch_kwargs)
except OSError as e:
- if self.url.startswith("https://files.pythonhosted.org/"):
+ if self.parsed_url.hostname in [
+ "files.pythonhosted.org",
+ "cdn.jsdelivr.net",
+ ]:
raise e
else:
raise ValueError(
- f"Can't fetch wheel from '{self.url}'."
+ f"Can't fetch wheel from '{self.url}'. "
"One common reason for this is when the server blocks "
- "Cross-Origin Resource Sharing (CORS)."
+ "Cross-Origin Resource Sharing (CORS). "
"Check if the server is sending the correct 'Access-Control-Allow-Origin' header."
) from e
- self.data = BytesIO(wheel_bytes)
-
- with ZipFile(self.data) as zip_file:
+ async def download(self, fetch_kwargs):
+ data = await self._fetch_bytes(fetch_kwargs)
+ self.data = data
+ with ZipFile(data) as zip_file:
self._dist = pkg_resources_distribution_for_wheel(
zip_file, self.name, "???"
)
@@ -117,11 +124,10 @@ def validate(self):
# No checksums available, e.g. because installing
# from a different location than PyPI.
return
- sha256 = self.digests["sha256"]
- m = hashlib.sha256()
+ sha256_expected = self.digests["sha256"]
assert self.data
- m.update(self.data.getvalue())
- if m.hexdigest() != sha256:
+ sha256_actual = _generate_package_hash(self.data)
+ if sha256_actual != sha256_expected:
raise ValueError("Contents don't match hash")
def extract(self, target: Path) -> None:
@@ -398,9 +404,9 @@ async def install(
See :ref:`loading packages <loading_packages>` for more information.
- This only works for packages that are either pure Python or for packages
- with C extensions that are built in Pyodide. If a pure Python package is not
- found in the Pyodide repository it will be loaded from PyPI.
+ If a package is not found in the Pyodide repository it will be loaded from
+ PyPI. Micropip can only load pure Python packages or for packages with C
+ extensions that are built for Pyodide.
When used in web browsers, downloads from PyPI will be cached. When run in
Node.js, packages are currently not cached, and will be re-downloaded each
@@ -411,15 +417,28 @@ async def install(
requirements : ``str | List[str]``
A requirement or list of requirements to install. Each requirement is a
- string, which should be either a package name or URL to a wheel:
+ string, which should be either a package name or a wheel URI:
+
+ - If the requirement does not end in ``.whl``, it will be interpreted as
+ a package name. A package with this name must either be present
+ in the Pyodide lock file or on PyPI.
+
+ - If the requirement ends in ``.whl``, it is a wheel URI. The part of
+ the requirement after the last ``/`` must be a valid wheel name in
+ compliance with the `PEP 427 naming convention
+ <https://www.python.org/dev/peps/pep-0427/#file-format>`_.
+
+ - If a wheel URI starts with ``emfs:``, it will be interpreted as a path
+ in the Emscripten file system (Pyodide's file system). E.g.,
+ `emfs:../relative/path/wheel.whl` or `emfs:/absolute/path/wheel.whl`.
+ In this case, only .whl files are supported.
+
+ - If a wheel URI requirement starts with ``http:`` or ``https:`` it will
+ be interpreted as a URL.
- - If the requirement ends in ``.whl`` it will be interpreted as a URL.
- The file must be a wheel named in compliance with the
- `PEP 427 naming convention <https://www.python.org/dev/peps/pep-0427/#file-format>`_.
+ - In node, you can access the native file system using a URI that starts
+ with ``file:``. In the browser this will not work.
- - If the requirement does not end in ``.whl``, it will interpreted as the
- name of a package. A package by this name must either be present in the
- Pyodide repository at :any:`indexURL <globalThis.loadPyodide>` or on PyPI
keep_going : ``bool``, default: False
@@ -427,28 +446,29 @@ async def install(
Python package without a pure Python wheel while doing dependency
resolution:
- - If ``False``, an error will be raised on first package with a missing wheel.
+ - If ``False``, an error will be raised on first package with a missing
+ wheel.
- - If ``True``, the micropip will keep going after the first error, and report a list
- of errors at the end.
+ - If ``True``, the micropip will keep going after the first error, and
+ report a list of errors at the end.
deps : ``bool``, default: True
- If ``True``, install dependencies specified in METADATA file for
- each package. Otherwise do not install dependencies.
+ If ``True``, install dependencies specified in METADATA file for each
+ package. Otherwise do not install dependencies.
credentials : ``Optional[str]``
This parameter specifies the value of ``credentials`` when calling the
- `fetch() <https://developer.mozilla.org/en-US/docs/Web/API/fetch>`__ function
- which is used to download the package.
+ `fetch() <https://developer.mozilla.org/en-US/docs/Web/API/fetch>`__
+ function which is used to download the package.
When not specified, ``fetch()`` is called without ``credentials``.
pre : ``bool``, default: False
- If ``True``, include pre-release and development versions.
- By default, micropip only finds stable versions.
+ If ``True``, include pre-release and development versions. By default,
+ micropip only finds stable versions.
Returns
-------
@@ -512,7 +532,7 @@ async def install(
await gather(*wheel_promises)
-def _generate_package_hash(data: BytesIO) -> str:
+def _generate_package_hash(data: IO[bytes]) -> str:
sha256_hash = hashlib.sha256()
data.seek(0)
while chunk := data.read(4096):
| diff --git a/packages/micropip/test_micropip.py b/packages/micropip/test_micropip.py
--- a/packages/micropip/test_micropip.py
+++ b/packages/micropip/test_micropip.py
@@ -153,19 +153,19 @@ async def _fetch_bytes(self, url, kwargs):
metadata_dir = f"{name}-{version}.dist-info"
- with io.BytesIO() as tmp:
- with zipfile.ZipFile(tmp, "w", zipfile.ZIP_DEFLATED) as archive:
+ tmp = io.BytesIO()
+ with zipfile.ZipFile(tmp, "w", zipfile.ZIP_DEFLATED) as archive:
- def write_file(filename, contents):
- archive.writestr(f"{metadata_dir}/{filename}", contents)
+ def write_file(filename, contents):
+ archive.writestr(f"{metadata_dir}/{filename}", contents)
- write_file("METADATA", metadata_str)
- write_file("WHEEL", "Wheel-Version: 1.0")
- write_file("top_level.txt", toplevel_str)
+ write_file("METADATA", metadata_str)
+ write_file("WHEEL", "Wheel-Version: 1.0")
+ write_file("top_level.txt", toplevel_str)
- tmp.seek(0)
+ tmp.seek(0)
- return tmp.read()
+ return tmp
@pytest.fixture
@@ -193,6 +193,9 @@ def selenium_standalone_micropip(selenium_standalone):
yield selenium_standalone
+SNOWBALL_WHEEL = "snowballstemmer-2.0.0-py2.py3-none-any.whl"
+
+
def test_install_simple(selenium_standalone_micropip):
selenium = selenium_standalone_micropip
assert (
@@ -215,27 +218,45 @@ def test_install_simple(selenium_standalone_micropip):
)
-def test_parse_wheel_url():
[email protected](
+ "path",
+ [
+ SNOWBALL_WHEEL,
+ f"/{SNOWBALL_WHEEL}" f"a/{SNOWBALL_WHEEL}",
+ f"/a/{SNOWBALL_WHEEL}",
+ f"//a/{SNOWBALL_WHEEL}",
+ ],
+)
[email protected]("protocol", ["https:", "file:", "emfs:", ""])
+def test_parse_wheel_url1(protocol, path):
pytest.importorskip("packaging")
from micropip._micropip import WheelInfo
- url = "https://a/snowballstemmer-2.0.0-py2.py3-none-any.whl"
+ url = protocol + path
wheel = WheelInfo.from_url(url)
assert wheel.name == "snowballstemmer"
assert str(wheel.version) == "2.0.0"
assert wheel.digests is None
- assert wheel.filename == "snowballstemmer-2.0.0-py2.py3-none-any.whl"
+ assert wheel.filename == SNOWBALL_WHEEL
assert wheel.url == url
assert wheel.tags == frozenset(
{Tag("py2", "none", "any"), Tag("py3", "none", "any")}
)
+
+def test_parse_wheel_url2():
+ from micropip._micropip import WheelInfo
+
msg = r"Invalid wheel filename \(wrong number of parts\)"
with pytest.raises(ValueError, match=msg):
url = "https://a/snowballstemmer-2.0.0-py2.whl"
- wheel = WheelInfo.from_url(url)
+ WheelInfo.from_url(url)
+
+
+def test_parse_wheel_url3():
+ from micropip._micropip import WheelInfo
- url = "http://scikit_learn-0.22.2.post1-cp35-cp35m-macosx_10_9_intel.whl"
+ url = "http://a/scikit_learn-0.22.2.post1-cp35-cp35m-macosx_10_9_intel.whl"
wheel = WheelInfo.from_url(url)
assert wheel.name == "scikit-learn"
assert wheel.tags == frozenset({Tag("cp35", "cp35m", "macosx_10_9_intel")})
@@ -248,7 +269,7 @@ def test_install_custom_url(selenium_standalone_micropip, base_url):
with spawn_web_server(Path(__file__).parent / "test") as server:
server_hostname, server_port, _ = server
base_url = f"http://{server_hostname}:{server_port}/"
- url = base_url + "snowballstemmer-2.0.0-py2.py3-none-any.whl"
+ url = base_url + SNOWBALL_WHEEL
selenium.run_js(
f"""
@@ -271,7 +292,7 @@ def test_install_file_protocol_node(selenium_standalone_micropip):
f"""
await pyodide.runPythonAsync(`
import micropip
- await micropip.install('file://{pyparsing_wheel_name}')
+ await micropip.install('file:{pyparsing_wheel_name}')
import pyparsing
`);
"""
@@ -301,7 +322,7 @@ async def test_add_requirement():
with spawn_web_server(Path(__file__).parent / "test") as server:
server_hostname, server_port, _ = server
base_url = f"http://{server_hostname}:{server_port}/"
- url = base_url + "snowballstemmer-2.0.0-py2.py3-none-any.whl"
+ url = base_url + SNOWBALL_WHEEL
transaction = create_transaction(Transaction)
await transaction.add_requirement(url)
@@ -309,7 +330,7 @@ async def test_add_requirement():
wheel = transaction.wheels[0]
assert wheel.name == "snowballstemmer"
assert str(wheel.version) == "2.0.0"
- assert wheel.filename == "snowballstemmer-2.0.0-py2.py3-none-any.whl"
+ assert wheel.filename == SNOWBALL_WHEEL
assert wheel.url == url
assert wheel.tags == frozenset(
{Tag("py2", "none", "any"), Tag("py3", "none", "any")}
@@ -431,7 +452,7 @@ async def test_install_non_pure_python_wheel():
msg = "not a pure Python 3 wheel"
with pytest.raises(ValueError, match=msg):
- url = "http://scikit_learn-0.22.2.post1-cp35-cp35m-macosx_10_9_intel.whl"
+ url = "http://a/scikit_learn-0.22.2.post1-cp35-cp35m-macosx_10_9_intel.whl"
transaction = create_transaction(Transaction)
await transaction.add_requirement(url)
@@ -635,7 +656,7 @@ def test_list_load_package_from_url(selenium_standalone_micropip):
with spawn_web_server(Path(__file__).parent / "test") as server:
server_hostname, server_port, _ = server
base_url = f"http://{server_hostname}:{server_port}/"
- url = base_url + "snowballstemmer-2.0.0-py2.py3-none-any.whl"
+ url = base_url + SNOWBALL_WHEEL
selenium = selenium_standalone_micropip
selenium.run_js(
@@ -763,3 +784,29 @@ async def test_freeze(mock_fetch: mock_fetch_cls) -> None:
assert pkg_metadata["imports"] == toplevel[0]
assert dep1_metadata["imports"] == toplevel[1]
assert dep2_metadata["imports"] == toplevel[2]
+
+
+def test_emfs(selenium_standalone_micropip):
+ with spawn_web_server(Path(__file__).parent / "test") as server:
+ server_hostname, server_port, _ = server
+ url = f"http://{server_hostname}:{server_port}/"
+
+ @run_in_pyodide(packages=["micropip"])
+ async def run_test(selenium, url, wheel_name):
+ import micropip
+ from pyodide.http import pyfetch
+
+ resp = await pyfetch(url + wheel_name)
+ await resp._into_file(open(wheel_name, "wb"))
+ await micropip.install("emfs:" + wheel_name)
+ import snowballstemmer
+
+ stemmer = snowballstemmer.stemmer("english")
+ assert stemmer.stemWords("go going goes gone".split()) == [
+ "go",
+ "go",
+ "goe",
+ "gone",
+ ]
+
+ run_test(selenium_standalone_micropip, url, SNOWBALL_WHEEL)
diff --git a/pyodide-test-runner/pyodide_test_runner/decorator.py b/pyodide-test-runner/pyodide_test_runner/decorator.py
--- a/pyodide-test-runner/pyodide_test_runner/decorator.py
+++ b/pyodide-test-runner/pyodide_test_runner/decorator.py
@@ -74,11 +74,14 @@ def <func_name>(<selenium_arg_name>, arg1, arg2, arg3):
name=node.name, args=node_args, body=[], lineno=1, decorator_list=[]
)
+ run_test_id = "run-test-not-valid-identifier"
+
# Make onwards call with two args:
# 1. <selenium_arg_name>
# 2. all other arguments in a tuple
func_body = ast.parse("return run_test(selenium_arg_name, (arg1, arg2, ...))").body
onwards_call = func_body[0].value
+ onwards_call.func = ast.Name(id=run_test_id, ctx=ast.Load())
onwards_call.args[0].id = selenium_arg_name # Set variable name
onwards_call.args[1].elts = [ # Set tuple elements
ast.Name(id=arg.arg, ctx=ast.Load()) for arg in node_args.args[1:]
@@ -105,7 +108,7 @@ def fake_body_for_traceback(arg1, arg2, selenium_arg_name):
# Need to give our code access to the actual "run_test" object which it
# invokes.
- globs = {"run_test": run_test}
+ globs = {run_test_id: run_test}
exec(co, globs)
return globs[node.name]
@@ -251,12 +254,17 @@ def _generate_pyodide_ast(
):
nodes.append(node)
+ if (
+ node.end_lineno
+ and node.end_lineno > func_line_no
+ and node.lineno < func_line_no
+ ):
+ it = iter(node.body)
+ continue
+
# We also want the function definition for the current test
if not isinstance(node, (ast.FunctionDef, ast.AsyncFunctionDef)):
continue
- if node.end_lineno > func_line_no and node.lineno < func_line_no:
- it = iter(node.body)
- continue
if node.lineno < func_line_no:
continue
| Micropip should be able to install wheels from the file system
## π Feature
Micropip should be able to install a wheel which is in the emscripten file system from a path.
### Motivation
In node (and some day maybe in chrome), we can mount the local file system into the Emscripten file system. It would be useful to install wheels from there, for instance when testing whether a wheel was built correctly.
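A hypothetical usage sketch of what this could look like (the discussion below eventually settles on an `emfs:` prefix for paths inside the Emscripten file system; the wheel path here is made up):
```py
import micropip

await micropip.install("emfs:/mnt/dist/mypackage-1.0.0-py3-none-any.whl")
```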
| Posting here for visibility.
The latest JupyterLite release adds support for accessing files from the Python (Pyodide-based) kernel:
- https://github.com/jupyterlite/jupyterlite/releases/tag/v0.1.0b9
- https://github.com/jupyterlite/jupyterlite/pull/655
While the mounting logic is a bit specific to JupyterLite / JupyterLab and enabled via a `ServiceWorker`, making `micropip` able to install packages from a local path would still be useful.
For example one could upload a wheel (or ship it as part of the default content) and install it with:
```py
import micropip
await micropip.install('./snowballstemmer-2.2.0-py2.py3-none-any.whl')
```

Maybe it can be `micropip.install_file`? `install_from_filesystem`?
Probably from a user point of view keeping the same `micropip.install()` would be more convenient? (like regular `pip`)
We currently interpret `micropip.install("a/b/c.whl")` as a relative URL, so there is no syntax space for that. Unless you want to do `micropip.install(from_path="a/b/c.whl")`?
What about intercepting `file://` before fetch as meaning _file in the WASM machine_? I'd think pyodide itself could not be usefully loaded when "hosted" from `file://` because `.json`, much less `.wasm`, would not be served properly. Further no _other_ protocol would be able to resolve it.
What about relative paths though?
Well, URIs don't _necessarily_ have to start with `<protocol>:/`: there's certainly precedent for non-slashed paths, e.g. Python's `sqlite3` module, which inherits [sqlite's implementation](https://www.sqlite.org/uri.html):
```py
# Open a database in read-only mode.
con = sqlite3.connect("file:template.db?mode=ro", uri=True)
```
So, with the default settings, `file:some-package.whl` could be interpreted the same as `file:/home/pyodide/some-package.whl`.
Indeed, allowing `<protocol>` to be extended is a _whacking_ good idea: `ipfs://` springs to mind as an _almost perfect_ tool for peer-to-peer package distribution.
> What about intercepting file:// before fetch as meaning file in the WASM machine?
The problem is that when run in Node.js, `file://` [means host file system](https://nodejs.org/api/fs.html#file-url-paths). Though for Python `file://` would probably still mean the whatever FS Python is installed in. We would want to keep a consistent behavior in the browser and Node. Maybe `memfs://` at least as far as micropip is concerned?
hey guys! awesome to see this being on the table so recently
got here while searching for a way to use my own packages in pyscript
for example i've got a package called xo-gd, which i want to use, but i can't seem to import it
seems like this is a requested feature not yet available seamlessly,
does anyone know of a way for me to do this? do i need to build pyodide myself with this package? or should i just copy all the python files from my package locally? (it's already all on pip, so i thought it should be straightforward)
any info would be appreciated! thx & have a good one!
@fire17 Would you mind opening a separate issue about it? It's a bit orthogonal to the current discussion. The general documentation for adding a package can be found [here](https://pyodide.org/en/stable/development/new-packages.html). Looking at the xo-gd description, it looks like a lof of the functionality it exposes will not be supported in the browser (let's discuss the details in a separate issue).
I am leaning towards going with `micropip.install(from_path="path/to/some/wheel.whl")` for now. I guess this won't work with e.g., `py-env` or requirement lists but if we want some string prefix that indicates to install from the file system we can figure out the details a bit later. My immediate use case is getting CI to work for out of tree package builds which know that they are trying to load a wheel from the local file system.
I guess another thought is it might be worth having a list of local directories that micropip could automatically search when looking for a wheel. This would better handle the case where you have a wheel and several dependencies of the wheel all in the same folder and you want micropip to correctly resolve the dependencies and load all of them.
> micropip.install(from_path= [..] we can figure out the details a bit later.
Maybe someone should try to do a draft implementation. I suspect that having two inputs would make the implementation less straightforward as currently we just pass a list of requirements everywhere. Also, it's not very common in Python to have multiple and mutually exclusive input arguments, that's probably the reason for the URL file prefix, instead of doing `urlopen(from_http=None, from_https=None, from_ftp=None, etc)`.
For some things, we can figure out the details later, but micropip.install is likely one of the most frequently used functions by end users. So it would be good to agree on a public API and implement it, so we can avoid API breakages or deprecations for this (unless we can finish iterating on it before the next release).
Well what about going with @bollwyvl's suggestion. We could set it up so that `file://` makes a fetch request but `file:some/path` can goes to the local file system.
Or maybe `emfs:some/path`. I don't care for `memfs` because it is possible to mount other file systems e.g., `nodefs` and then it's a bit confusing. But they are all the Emscripten file system.
"emfs:" is concise yet distinctive. The unslashed-is-relative seems
reasonable as well.
I do still like the idea being able to (re)register of new protocol
handlers that can handle both wheels and warehouse-like API responses...
But "wheels on the disk" seems like a fine place to start.
OK, let's go with `emfs:`.
Actually I wasn't aware that,
> - The // after the file: denotes that either a hostname or the literal term localhost will follow,[[2]](https://en.wikipedia.org/wiki/File_URI_scheme#cite_note-2) although this part may be omitted entirely, or may contain an empty hostname.[[3]](https://en.wikipedia.org/wiki/File_URI_scheme#cite_note-3)
>- file://path (i.e. two slashes, without a hostname) is never correct, but is often used
https://en.wikipedia.org/wiki/File_URI_scheme
Yeah, URLs are definitely funny things. Here's a quick look at what `urllib.parse.urlparse` and JavaScript's `new URL` do with various things:
```py
from pathlib import Path
from urllib.parse import urlparse
import js
for p in ["emfs:foo.whl", "emfs:./foo.whl", "emfs:/home/pydodide/bar.whl", "emfs://home/pyodide/baz.whl"]:
print("\n---", p, "---")
py_parsed = urlparse(p)
js_parsed = js.eval(f"new URL('{p}')").to_py()
print("py parsed:", py_parsed)
print("... url:", py_parsed.geturl())
print("... path:", Path(py_parsed.path))
print("... abspath:", Path(py_parsed.path).resolve())
print("js parsed:", js_parsed)
print("... parts", f"protocol='{js_parsed.protocol}', origin='{js_parsed.origin}', pathname='{js_parsed.pathname}'")
```
```
--- emfs:foo.whl ---
py parsed: ParseResult(scheme='emfs', netloc='', path='foo.whl', params='', query='', fragment='')
... url: emfs:foo.whl
... path: foo.whl
... abspath: /home/pyodide/foo.whl
js parsed: emfs:foo.whl
... parts protocol='emfs:', origin='null', pathname='foo.whl'
--- emfs:./foo.whl ---
py parsed: ParseResult(scheme='emfs', netloc='', path='./foo.whl', params='', query='', fragment='')
... url: emfs:./foo.whl
... path: foo.whl
... abspath: /home/pyodide/foo.whl
js parsed: emfs:./foo.whl
... parts protocol='emfs:', origin='null', pathname='./foo.whl'
--- emfs:/home/pyodide/bar.whl ---
py parsed: ParseResult(scheme='emfs', netloc='', path='/home/pyodide/bar.whl', params='', query='', fragment='')
... url: emfs:/home/pyodide/bar.whl
... path: /home/pyodide/bar.whl
... abspath: /home/pyodide/bar.whl
js parsed: emfs:/home/pyodide/bar.whl
... parts protocol='emfs:', origin='null', pathname='/home/pyodide/bar.whl'
--- emfs://home/pyodide/baz.whl ---
py parsed: ParseResult(scheme='emfs', netloc='home', path='/pyodide/baz.whl', params='', query='', fragment='')
... url: emfs://home/pyodide/baz.whl
... path: /pyodide/baz.whl
... abspath: /pyodide/baz.whl
js parsed: emfs://home/pyodide/baz.whl
... parts protocol='emfs:', origin='null', pathname='//home/pyodide/baz.whl'
```
Thanks everyone for the discussion! | 2022-06-22T18:51:11 |
pyodide/pyodide | 2,913 | pyodide__pyodide-2913 | [
"2911"
] | 7231cab3ffc83f6221fafb7458f9b223d2a7c759 | diff --git a/docs/conf.py b/docs/conf.py
--- a/docs/conf.py
+++ b/docs/conf.py
@@ -17,7 +17,7 @@
copyright = "2019-2022, Pyodide contributors and Mozilla"
pyodide_version = "0.21.0a3"
-if ".dev" in pyodide_version:
+if ".dev" in pyodide_version or os.environ.get("READTHEDOCS_VERSION") == "latest":
CDN_URL = "https://cdn.jsdelivr.net/pyodide/dev/full/"
else:
CDN_URL = f"https://cdn.jsdelivr.net/pyodide/v{pyodide_version}/full/"
| https://pyodide.org/en/latest/console.html doesn't show latest `main` version
## π Bug
The https://pyodide.org/en/latest/console.html console is stuck at `v0.21.0a3`. I believe this is because the version wasn't bumped to the next 'dev' version after the `v0.21.0a3` release, so somebody probably needs to run
```
./tools/bump_version.py --new-version 0.21.0.dev0
```
Without `dev` in the version, the documentation's console uses the release:
https://github.com/pyodide/pyodide/blob/7231cab3ffc83f6221fafb7458f9b223d2a7c759/docs/conf.py#L20-L23
### To Reproduce
Go to https://pyodide.org/en/latest/console.html and load a package added since v0.21.0a3, e.g., `import rebound`
| Thanks for the report @jobovy. Probably we should use `READTHEDOCS_VERSION` to check if we are building for a `latest` version. | 2022-07-26T06:47:39 |
|
pyodide/pyodide | 2,920 | pyodide__pyodide-2920 | [
"2918"
] | 21b592a6d2fd095e086e4387698cf80447d080a6 | diff --git a/pyodide-build/pyodide_build/pypabuild.py b/pyodide-build/pyodide_build/pypabuild.py
--- a/pyodide-build/pyodide_build/pypabuild.py
+++ b/pyodide-build/pyodide_build/pypabuild.py
@@ -58,7 +58,12 @@ def install_reqs(env: IsolatedEnv, reqs: set[str]) -> None:
# only recythonize if it is present. We need them to always recythonize so
# we always install cython. If the reqs included some cython version already
# then this won't do anything.
- env.install(["cython", "pythran"])
+ env.install(
+ [
+ "cython<0.29.31", # cython 0.29.31 is incompatible with scipy 1.8.1. TODO: remove this after the scipy update.
+ "pythran",
+ ]
+ )
def _build_in_isolated_env(
| Scipy build failing with Cython error
## π Bug
Scipy build is failing in [main](https://app.circleci.com/pipelines/github/ryanking13/pyodide/705/workflows/f75c57c6-95c0-4437-85ca-47889051954d/jobs/11444?invite=true#step-105-1365). I guess it is related to the new Cython release.
```
_stats.pyx:173:0: Referring to a memoryview typed argument directly in a nested closure function is not supported in Cython 0.x. Either upgrade to Cython 3, or assign the argument to a local variable and use that in the nested function.
Traceback (most recent call last):
File "/root/repo/packages/scipy/build/scipy-1.8.1/tools/cythonize.py", line 317, in <module>
main()
File "/root/repo/packages/scipy/build/scipy-1.8.1/tools/cythonize.py", line 313, in main
find_process_files(root_dir)
File "/root/repo/packages/scipy/build/scipy-1.8.1/tools/cythonize.py", line 302, in find_process_files
for result in pool.imap_unordered(lambda args: process(*args), jobs):
File "/usr/local/lib/python3.10/multiprocessing/pool.py", line 870, in next
raise value
File "/usr/local/lib/python3.10/multiprocessing/pool.py", line 125, in worker
result = (True, func(*args, **kwds))
File "/root/repo/packages/scipy/build/scipy-1.8.1/tools/cythonize.py", line 302, in <lambda>
for result in pool.imap_unordered(lambda args: process(*args), jobs):
File "/root/repo/packages/scipy/build/scipy-1.8.1/tools/cythonize.py", line 236, in process
processor_function(fromfile, tofile, cwd=path)
File "/root/repo/packages/scipy/build/scipy-1.8.1/tools/cythonize.py", line 102, in process_pyx
raise Exception('Cython failed')
Exception: Cython failed
```
| Upstream (scipy) issue tracking this bug: https://github.com/scipy/scipy/issues/16718
We should pin the cython version used, as it's likely to be forward incompatible, particularly once the long-awaited 3.0 version is released.
Or we can think of adding a new patch for scipy until they release 1.9.0.
As you prefer. But it's really upstream's job to fix these things; we are just building the latest version that is known to work. So there is no real advantage for us to run the latest Cython in this particular case.
|
pyodide/pyodide | 2,935 | pyodide__pyodide-2935 | [
"2881"
] | 58ce5e0aa87e9d72dabb99a36bf931866b033557 | diff --git a/docs/conf.py b/docs/conf.py
--- a/docs/conf.py
+++ b/docs/conf.py
@@ -15,7 +15,7 @@
project = "Pyodide"
copyright = "2019-2022, Pyodide contributors and Mozilla"
-pyodide_version = "0.21.0a3"
+pyodide_version = "0.21.0"
if ".dev" in pyodide_version or os.environ.get("READTHEDOCS_VERSION") == "latest":
CDN_URL = "https://cdn.jsdelivr.net/pyodide/dev/full/"
diff --git a/pyodide-build/pyodide_build/__init__.py b/pyodide-build/pyodide_build/__init__.py
--- a/pyodide-build/pyodide_build/__init__.py
+++ b/pyodide-build/pyodide_build/__init__.py
@@ -1 +1 @@
-__version__ = "0.21.0a3"
+__version__ = "0.21.0"
diff --git a/src/py/pyodide/__init__.py b/src/py/pyodide/__init__.py
--- a/src/py/pyodide/__init__.py
+++ b/src/py/pyodide/__init__.py
@@ -10,7 +10,7 @@
# This package is imported by the test suite as well, and currently we don't use
# pytest mocks for js or pyodide_js, so make sure to test "if IN_BROWSER" before
# importing from these.
-__version__ = "0.21.0a3"
+__version__ = "0.21.0"
__all__ = ["__version__"]
| Release 0.21
I went through issues and tagged things for the 0.21 release https://github.com/pyodide/pyodide/milestone/11
Mostly, naming-related subjects need to be discussed/addressed before the release IMO, since once we release they will be harder to change.
| Are there any remaining blockers for this @rth @ryanking13 ?
I think #2893 was the last blocker for this. | 2022-08-03T00:49:31 |
|
pyodide/pyodide | 2,939 | pyodide__pyodide-2939 | [
"2937"
] | d0cc45fa450604fcdaab59499b226f536028d7e3 | diff --git a/pyodide-build/pyodide_build/common.py b/pyodide-build/pyodide_build/common.py
--- a/pyodide-build/pyodide_build/common.py
+++ b/pyodide-build/pyodide_build/common.py
@@ -80,6 +80,7 @@ def find_matching_wheels(wheel_paths: Iterable[Path]) -> Iterator[Path]:
"sharedlib-test-py",
"cpp-exceptions-test",
"ssl",
+ "lzma",
"pytest",
"tblib",
}
| diff --git a/packages/lzma/test_lzma.py b/packages/lzma/test_lzma.py
new file mode 100644
--- /dev/null
+++ b/packages/lzma/test_lzma.py
@@ -0,0 +1,17 @@
+from pytest_pyodide import run_in_pyodide
+
+
+@run_in_pyodide(packages=["test", "lzma"], pytest_assert_rewrites=False)
+def test_lzma(selenium):
+ # TODO: libregrtest.main(["test_lzma"]) doesn't collect any tests for some unknown reason.
+
+ import test.test_lzma
+ import unittest
+
+ suite = unittest.TestSuite(
+ [unittest.TestLoader().loadTestsFromModule(test.test_lzma)]
+ )
+
+ runner = unittest.TextTestRunner(verbosity=2)
+ result = runner.run(suite)
+ assert result.wasSuccessful()
| Add lzma
As mentioned by @hoodmane in https://github.com/pyodide/pyodide/discussions/2930#discussioncomment-3316181
> Is there an issue open about lzma? What is our position on it again? That we want it but there is no emscripten port and we haven't gotten to it?
I think the main concern was the size increase for everyone vs few people actually needing it. Depending on the size maybe we could make it an unvendored stdlib package (or include by default if the size is negligible).
| 2022-08-05T00:04:55 |
|
pyodide/pyodide | 2,997 | pyodide__pyodide-2997 | [
"2923"
] | ad128337e8a0b7e9edb8d4ae4eb975858a84e94a | diff --git a/src/py/pyodide/webloop.py b/src/py/pyodide/webloop.py
--- a/src/py/pyodide/webloop.py
+++ b/src/py/pyodide/webloop.py
@@ -1,8 +1,10 @@
import asyncio
import contextvars
+import inspect
import sys
import time
import traceback
+from asyncio import Future
from collections.abc import Callable
from typing import Any
@@ -12,6 +14,108 @@
from js import setTimeout
+class PyodideFuture(Future[Any]):
+ """A future with extra then, catch, and finally_ methods based on the
+ Javascript promise API.
+ """
+
+ def then(
+ self,
+ onfulfilled: Callable[[Any], Any] | None,
+ onrejected: Callable[[Exception], Any] | None = None,
+ ) -> "PyodideFuture":
+ """When the Future is done, either execute onfulfilled with the result
+ or execute onrejected with the exception.
+
+ Returns a new Future which will be marked done when either the
+ onfulfilled or onrejected callback is completed. If the return value of
+ the executed callback is awaitable it will be awaited repeatedly until a
+ nonawaitable value is received. The returned Future will be resolved
+ with that value. If an error is raised, the returned Future will be
+ rejected with the error.
+
+ Parameters
+ ----------
+ onfulfilled:
+ A function called if the Future is fulfilled. This function receives
+ one argument, the fulfillment value.
+
+ onrejected:
+ A function called if the Future is rejected. This function receives
+ one argument, the rejection value.
+
+ Returns
+ -------
+ A new future to be resolved when the original future is done and the
+ appropriate callback is also done.
+ """
+ result = PyodideFuture()
+
+ onfulfilled_: Callable[[Any], Any]
+ onrejected_: Callable[[Any], Any]
+ if onfulfilled:
+ onfulfilled_ = onfulfilled
+ else:
+
+ def onfulfilled_(x):
+ return x
+
+ if onrejected:
+ onrejected_ = onrejected
+ else:
+
+ def onrejected_(x):
+ raise x
+
+ async def callback(fut: Future[Any]) -> Any:
+ if fut.exception():
+ val = fut.exception()
+ fun = onrejected_
+ else:
+ fun = onfulfilled_
+ val = fut.result()
+ try:
+ r = fun(val)
+ while inspect.isawaitable(r):
+ r = await r
+ except Exception as result_exception:
+ result.set_exception(result_exception)
+ return
+ result.set_result(r)
+
+ def wrapper(fut: Future[Any]) -> None:
+ asyncio.ensure_future(callback(fut))
+
+ self.add_done_callback(wrapper)
+ return result
+
+ def catch(self, onrejected: Callable[[Exception], Any] | None) -> "PyodideFuture":
+ return self.then(None, onrejected)
+
+ def finally_(self, onfinally: Callable[[], Any]) -> "PyodideFuture":
+ result = PyodideFuture()
+
+ async def callback(fut: Future[Any]) -> None:
+ exc = fut.exception()
+ try:
+ r = onfinally()
+ while inspect.isawaitable(r):
+ r = await r
+ except Exception as e:
+ result.set_exception(e)
+ return
+ if exc:
+ result.set_exception(exc)
+ else:
+ result.set_result(fut.result())
+
+ def wrapper(fut: Future[Any]) -> None:
+ asyncio.ensure_future(callback(fut))
+
+ self.add_done_callback(wrapper)
+ return result
+
+
class WebLoop(asyncio.AbstractEventLoop):
"""A custom event loop for use in Pyodide.
@@ -193,6 +297,10 @@ def run_in_executor(self, executor, func, *args):
fut.set_exception(e)
return fut
+ def create_future(self) -> asyncio.Future[Any]:
+ """Create a Future object attached to the loop."""
+ return PyodideFuture(loop=self)
+
#
# The remaining methods are copied directly from BaseEventLoop
#
@@ -208,13 +316,6 @@ def time(self) -> float:
"""
return time.monotonic()
- def create_future(self) -> asyncio.Future[Any]:
- """Create a Future object attached to the loop.
-
- Copied from ``BaseEventLoop.create_future``
- """
- return asyncio.futures.Future(loop=self)
-
def create_task(self, coro, *, name=None):
"""Schedule a coroutine object.
| diff --git a/src/tests/test_webloop.py b/src/tests/test_webloop.py
--- a/src/tests/test_webloop.py
+++ b/src/tests/test_webloop.py
@@ -1,4 +1,5 @@
import pytest
+from pytest_pyodide.decorator import run_in_pyodide
def run_with_resolve(selenium, code):
@@ -229,3 +230,119 @@ async def test():
)
finally:
selenium.run("loop.set_exception_handler(None)")
+
+
[email protected]
+async def test_pyodide_future():
+ import asyncio
+
+ from pyodide.webloop import PyodideFuture
+
+ fut = PyodideFuture()
+ increment = lambda x: x + 1
+ tostring = lambda x: repr(x)
+
+ def raises(x):
+ raise Exception(x)
+
+ rf = fut.then(increment).then(increment)
+ fut.set_result(5)
+ assert await rf == 7
+
+ e = Exception("oops")
+ fut = PyodideFuture()
+ rf = fut.then(increment, tostring)
+ fut.set_exception(e)
+ assert await rf == repr(e)
+
+ e = Exception("oops")
+ fut = PyodideFuture()
+ rf = fut.catch(tostring)
+ fut.set_exception(e)
+ assert await rf == repr(e)
+
+ async def f(x):
+ await asyncio.sleep(0.1)
+ return x + 1
+
+ fut = PyodideFuture()
+ rf = fut.then(f)
+
+ fut.set_result(6)
+ assert await rf == 7
+
+ fut = PyodideFuture()
+ rf = fut.then(raises)
+ fut.set_result(6)
+ try:
+ await rf
+ except Exception:
+ pass
+ assert repr(rf.exception()) == repr(Exception(6))
+
+ x = 0
+
+ def incx():
+ nonlocal x
+ x += 1
+
+ fut = PyodideFuture()
+ rf = fut.then(increment).then(increment).finally_(incx).finally_(incx)
+ assert x == 0
+ fut.set_result(5)
+ await rf
+ assert x == 2
+
+ fut = PyodideFuture()
+ rf = fut.then(increment).then(increment).finally_(incx).finally_(incx)
+ fut.set_exception(e)
+ try:
+ await rf
+ except Exception:
+ pass
+ assert x == 4
+
+ async def f1(x):
+ if x == 0:
+ return 7
+ await asyncio.sleep(0.1)
+ return f1(x - 1)
+
+ fut = PyodideFuture()
+ rf = fut.then(f1)
+ fut.set_result(3)
+ assert await rf == 7
+
+ async def f2():
+ await asyncio.sleep(0.1)
+ raise e
+
+ fut = PyodideFuture()
+ rf = fut.finally_(f2)
+ fut.set_result(3)
+ try:
+ await rf
+ except Exception:
+ pass
+ assert rf.exception() == e
+
+ fut = PyodideFuture()
+ rf = fut.finally_(f2)
+ fut.set_exception(Exception("oops!"))
+ try:
+ await rf
+ except Exception:
+ pass
+ assert rf.exception() == e
+
+
+@run_in_pyodide
+async def test_pyodide_future2(selenium):
+ from js import fetch
+
+ name = (
+ await fetch("https://pypi.org/pypi/pytest/json")
+ .then(lambda x: x.json())
+ .then(lambda x: x.info.name)
+ )
+ assert name == "pytest"
| 'Future' object has no attribute 'then'
## π Bug
If I understand correctly, starting with version 0.17, Pyodide has built-in support for `await` on a JsProxy. So when JS returns a `Promise`, it is converted to a `Future` in Python, which allows us to use `await`.
However, the `Future` object has no attribute `then`, so it is no longer possible to build `then`/`catch` chains like in older versions.
### To Reproduce
```python
import js
js.fetch('http://karay.me/truepyxel/test.json').then(lambda resp: resp.json()).then(lambda data: data.msg).catch(lambda err: 'there were error: '+err.message)
```
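For reference, the `await`-based equivalent does work in 0.20, since awaiting a `JsProxy` of a `Promise` is supported (a sketch):

```python
# Minimal sketch of the await-based workaround available today:
import js

async def get_msg():
    resp = await js.fetch('http://karay.me/truepyxel/test.json')
    data = await resp.json()
    return data.msg
```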
### Environment
- Pyodide Version: 0.20
- Browser version<!-- (e.g. Chrome 95.0.4638.54) -->: 103.0.5060.134
| Yes we should add `then`, `catch`, and `finally_` methods to the `Future`. | 2022-08-21T15:32:14 |
pyodide/pyodide | 2,998 | pyodide__pyodide-2998 | [
"2940"
] | fdf39f89efe0308ee091e884c151ce57fbe3168f | diff --git a/pyodide-build/pyodide_build/common.py b/pyodide-build/pyodide_build/common.py
--- a/pyodide-build/pyodide_build/common.py
+++ b/pyodide-build/pyodide_build/common.py
@@ -97,6 +97,9 @@ def find_matching_wheels(wheel_paths: Iterable[Path]) -> Iterator[Path]:
"micropip",
"distutils",
"test",
+ "ssl",
+ "lzma",
+ "sqlite3",
}
CORE_PACKAGES = {
@@ -109,11 +112,8 @@ def find_matching_wheels(wheel_paths: Iterable[Path]) -> Iterator[Path]:
"fpcast-test",
"sharedlib-test-py",
"cpp-exceptions-test",
- "ssl",
- "lzma",
"pytest",
"tblib",
- "sqlite3",
}
CORE_SCIPY_PACKAGES = {
diff --git a/src/py/_pyodide/_importhook.py b/src/py/_pyodide/_importhook.py
--- a/src/py/_pyodide/_importhook.py
+++ b/src/py/_pyodide/_importhook.py
@@ -138,6 +138,10 @@ def register_js_finder() -> None:
sys.meta_path.append(jsfinder)
+UNVENDORED_STDLIBS = ["distutils", "ssl", "lzma", "sqlite3"]
+UNVENDORED_STDLIBS_AND_TEST = UNVENDORED_STDLIBS + ["test"]
+
+
class UnvendoredStdlibFinder(MetaPathFinder):
"""
A MetaPathFinder that handles unvendored and removed stdlib modules.
@@ -150,15 +154,10 @@ class UnvendoredStdlibFinder(MetaPathFinder):
def __init__(self) -> None:
# `test`` is not a stdlib module, but we unvendors in anyway.
self.stdlibs = sys.stdlib_module_names | {"test"}
-
- # TODO: put list of unvendored stdlibs to somewhere else?
- self.unvendored_stdlibs = {
- "distutils",
- "test",
- "ssl",
- "lzma",
- "sqlite3",
- } & self.stdlibs
+ self.unvendored_stdlibs = set(UNVENDORED_STDLIBS_AND_TEST)
+ assert not (
+ self.unvendored_stdlibs - self.stdlibs
+ ), "unvendored stdlibs not in stdlibs"
def find_spec(
self,
| diff --git a/src/tests/test_pyodide.py b/src/tests/test_pyodide.py
--- a/src/tests/test_pyodide.py
+++ b/src/tests/test_pyodide.py
@@ -1130,6 +1130,27 @@ def test_sys_path0(selenium):
)
+def test_fullstdlib(selenium_standalone_noload):
+ selenium = selenium_standalone_noload
+ selenium.run_js(
+ """
+ let pyodide = await loadPyodide({
+ fullStdLib: true,
+ });
+
+ await pyodide.loadPackage("micropip");
+
+ pyodide.runPython(`
+ import _pyodide
+ import micropip
+ loaded_packages = micropip.list()
+ print(loaded_packages)
+ assert all((lib in micropip.list()) for lib in _pyodide._importhook.UNVENDORED_STDLIBS)
+ `);
+ """
+ )
+
+
@run_in_pyodide
def test_run_js(selenium):
from unittest import TestCase
| [Discussion] Changing the default behavior of loading unvendored stdlib
## Proposed refactoring or deprecation
- Related: #1542
We unvendor some stdlib packages in order to reduce the size of the main module.
- distutils
- tests
- ssl
- lzma (#2939)
- (maybe more in the future, e.g. sqlite3, unicodedata, ..., #1826)
The `pyodide.loadPyodide` API accepts a `fullstdlib` parameter, which currently defaults to true. What `fullstdlib` currently does is [load the distutils package if true](https://github.com/pyodide/pyodide/blob/14708295cd1d2a38235e2dd3727ed7760e29eaa0/src/js/pyodide.ts#L319-L321). It does not affect loading other unvendored stdlibs like ssl, which I think is a bit confusing for users.
IMO not loading ssl and some other unvendored stdlibs is reasonable, since those packages are quite big.
```
960K distutils.tar
840K ssl-1.0.0.zip
```
So I think it would be great to
1. Change the default value of `fullstdlib` to `False`.
2. If `fullstdlib` is `True`, load all unvendored packages including distutils, ssl, lzma (except for `test`, which is too big (~22MB) and mostly not used by users).
3. Add unvendored stdlibs to the "always packages" list when building.
WDYT?
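For illustration, under this proposal a user who needs one of the unvendored modules would load it explicitly before importing it, e.g. in the Pyodide console (a sketch; the package names are the proposed ones):

```python
# Sketch: explicitly loading an unvendored stdlib module before importing it.
import pyodide_js

await pyodide_js.loadPackage("ssl")
import ssl  # now importable
```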
| We should discuss this at https://discuss.python.org/c/webassembly/28. I wonder how scalable our current approach is and whether it wouldn't break people's assumptions about what the stdlib should contain (particularly now that Wasm is an officially supported CPython target).
> We should discuss this at https://discuss.python.org/c/webassembly/28. I wonder how scalable our current approach is and whether it wouldn't break people's assumptions about what the stdlib should contain (particularly now that Wasm is an officially supported CPython target).
Yes, I think you are right. Actually, I found that you have already opened a discussion about this last year: https://discuss.python.org/t/minifying-the-stdlib-in-pyodide/8414.
Opened a new discussion here: https://discuss.python.org/t/unvendoring-some-of-stdlib-modules/18076
Back on this topic, my opinion is that we can start by not loading distutils by default (i.e. setting the default value of `fullstdlib` to false), while unvendoring other packages can be discussed case by case later.
Yes, agreed. It was good to get feedback from CPython devs. It sounded like there is no strong opposition to this idea, but I agree that we should be restrained about the number of stdlib modules we unvendor; otherwise the user experience is going to be worse when installing dependencies.
pyodide/pyodide | 3,013 | pyodide__pyodide-3013 | [
"3011"
] | 7bb413d1782e66840d3cb90d36dcbcc6e2422113 | diff --git a/packages/micropip/src/micropip/_compat_in_pyodide.py b/packages/micropip/src/micropip/_compat_in_pyodide.py
--- a/packages/micropip/src/micropip/_compat_in_pyodide.py
+++ b/packages/micropip/src/micropip/_compat_in_pyodide.py
@@ -23,7 +23,7 @@ async def fetch_bytes(url: str, kwargs: dict[str, str]) -> IO[bytes]:
if parsed_url.scheme == "emfs":
return open(parsed_url.path, "rb")
if parsed_url.scheme == "file":
- result_bytes = (await loadBinaryFile("", parsed_url.path)).to_bytes()
+ result_bytes = (await loadBinaryFile(parsed_url.path)).to_bytes()
else:
result_bytes = await (await pyfetch(url, **kwargs)).bytes()
return BytesIO(result_bytes)
| diff --git a/src/tests/test_package_loading.py b/src/tests/test_package_loading.py
--- a/src/tests/test_package_loading.py
+++ b/src/tests/test_package_loading.py
@@ -2,15 +2,18 @@
from pathlib import Path
import pytest
+from pytest_pyodide.fixture import selenium_common
+from pytest_pyodide.server import spawn_web_server
+from pytest_pyodide.utils import parse_driver_timeout, set_webdriver_script_timeout
-from conftest import DIST_PATH
+from conftest import DIST_PATH, ROOT_PATH
-def get_pyparsing_wheel_name():
+def get_pyparsing_wheel_name() -> str:
return list(DIST_PATH.glob("pyparsing*.whl"))[0].name
-def get_pytz_wheel_name():
+def get_pytz_wheel_name() -> str:
return list(DIST_PATH.glob("pytz*.whl"))[0].name
@@ -58,10 +61,33 @@ def test_load_from_url(selenium_standalone, web_server_secondary, active_server)
selenium.run("import pytz")
-def test_load_relative_url(selenium_standalone):
- print(get_pytz_wheel_name())
- selenium_standalone.load_package(f"./{get_pytz_wheel_name()}")
- selenium_standalone.run("import pytz")
+def test_load_relative_url(
+ request, runtime, web_server_main, playwright_browsers, tmp_path
+):
+ url, port, _ = web_server_main
+ test_html = (ROOT_PATH / "src/templates/test.html").read_text()
+ test_html = test_html.replace("./pyodide.js", f"http://{url}:{port}/pyodide.js")
+ (tmp_path / "test.html").write_text(test_html)
+ pytz_wheel = get_pytz_wheel_name()
+ pytz1_wheel = pytz_wheel.replace("pytz", "pytz1")
+ shutil.copy(DIST_PATH / pytz_wheel, tmp_path / pytz1_wheel)
+
+ with spawn_web_server(tmp_path) as web_server, selenium_common(
+ request,
+ runtime,
+ web_server,
+ load_pyodide=True,
+ browsers=playwright_browsers,
+ script_type="classic",
+ ) as selenium, set_webdriver_script_timeout(
+ selenium, script_timeout=parse_driver_timeout(request.node)
+ ):
+ if selenium.browser == "node":
+ selenium.run_js(f"process.chdir('{tmp_path.resolve()}')")
+ selenium.load_package(pytz1_wheel)
+ selenium.run(
+ "import pytz; from pyodide_js import loadedPackages; print(loadedPackages.pytz1)"
+ )
def test_list_loaded_urls(selenium_standalone):
| Relative URLs in pyodide.loadPackage
## π Bug
The documentation states that [pyodide.loadPackage](https://pyodide.org/en/stable/usage/api/js-api.html#pyodide.loadPackage) supports relative URLs. I'm trying to load an out-of-tree wheel from my local webserver, but this doesn't seem to work out well.
### To Reproduce
```js
await pyodide.loadPackage("dist/igraph-0.9.11-cp310-cp310-emscripten_3_1_14_wasm32.whl");
```
or
```js
await pyodide.loadPackage("./dist/igraph-0.9.11-cp310-cp310-emscripten_3_1_14_wasm32.whl");
```
Pyodide tries to load the wheel from `https://cdn.jsdelivr.net/pyodide/v0.21.1/full/dist/igraph-0.9.11-cp310-cp310-emscripten_3_1_14_wasm32.whl`.
### Expected behavior
Load the wheel from the relative URL.
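That is, a relative path should resolve against the page's own location rather than against `indexURL`. Roughly (a sketch with a made-up origin):

```python
# Sketch of the expected resolution, using a hypothetical page URL:
from urllib.parse import urljoin

page = "https://example.com/app/index.html"
print(urljoin(page, "dist/igraph-0.9.11-cp310-cp310-emscripten_3_1_14_wasm32.whl"))
# -> https://example.com/app/dist/igraph-0.9.11-cp310-cp310-emscripten_3_1_14_wasm32.whl
```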
### Environment
- Pyodide Version: 0.21.1
- Browser version: Firefox ESR 91.12.0, Chromium 104.0.5112.101
| Looking into this, I can reproduce it. It's not clear that it ever worked. I'm trying to fix our test for it first.
The problem is in this function:
https://github.com/pyodide/pyodide/blob/main/src/js/compat.ts#L125 | 2022-08-24T19:20:27 |
pyodide/pyodide | 3,020 | pyodide__pyodide-3020 | [
"2960"
] | 0df5f790952d6cf413ef99493c27d1765a1ea8c0 | diff --git a/pyodide-build/pyodide_build/buildall.py b/pyodide-build/pyodide_build/buildall.py
--- a/pyodide-build/pyodide_build/buildall.py
+++ b/pyodide-build/pyodide_build/buildall.py
@@ -522,11 +522,6 @@ def generate_packagedata(
}
packages[name.lower() + "-tests"] = pkg_entry
- # Workaround for circular dependency between soupsieve and beautifulsoup4
- # TODO: FIXME!!
- if "soupsieve" in packages:
- packages["soupsieve"]["depends"].append("beautifulsoup4")
-
# sort packages by name
packages = dict(sorted(packages.items()))
return packages
diff --git a/pyodide-build/pyodide_build/common.py b/pyodide-build/pyodide_build/common.py
--- a/pyodide-build/pyodide_build/common.py
+++ b/pyodide-build/pyodide_build/common.py
@@ -163,10 +163,6 @@ def _parse_package_subset(query: str | None) -> set[str]:
packages |= CORE_PACKAGES | CORE_SCIPY_PACKAGES
packages.discard("min-scipy-stack")
- # Hack to deal with the circular dependence between soupsieve and
- # beautifulsoup4
- if "beautifulsoup4" in packages:
- packages.add("soupsieve")
packages.discard("")
return packages
| Importing reboundx fails occasionally
## π Bug
Importing reboundx fails occasionally in the stable/latest branch.
The reason for the failure is that we download and install packages simultaneously.
(To be more precise, we install shared libraries first, then install normal wheel packages.)
https://github.com/pyodide/pyodide/blob/b497ce26ed94774a8d695a3e61016a7a26e45e3b/src/js/load-package.ts#L436-L448
The problem is that `reboundx` depends on another package that is not a shared library: `rebound`. When loading `reboundx`, it loads `libreboundx.so`, which loads `librebound.so`, which lives in the `rebound` package. So if `reboundx` is downloaded and installed *before* `rebound`, it fails to load because there is no `librebound.so` in the virtual file system.
I think `reboundx` needs to bundle `librebound.so` inside its wheel. However, we can try to fix this issue on the Pyodide side.
The possible solution can be:
1. When `pyodide.loadPackage()` is called, load dependencies first, then load the requested package.
2. Separate out downloading and dynlib loading process, so downloading all packages happens first, then loading dynlibs happens next.
Personally, I think option 2 (sketched below) is better since, if we use option 1, it might be tough to handle cases like `pyodide.loadPackage(["reboundx", "rebound"])`. The downside of option 2 is a potential slowdown in loading packages.
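To make option 2 concrete, here is a toy two-phase sketch using asyncio (the real logic lives in `src/js/load-package.ts`; every name below is made up):

```python
# Toy sketch of option 2: finish all downloads first, then load dynlibs,
# so load order no longer depends on download order.
import asyncio

async def download(name: str) -> str:
    await asyncio.sleep(0)  # stand-in for a network fetch
    return f"{name}.whl"

async def load_packages(names: list[str]) -> None:
    wheels = await asyncio.gather(*(download(n) for n in names))  # phase 1
    for wheel in wheels:  # phase 2: every dependency is now on disk
        print("loading dynlibs from", wheel)

asyncio.run(load_packages(["rebound", "reboundx"]))
```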
### To Reproduce
```
await loadPackage("reboundx")
```

### Environment
- Pyodide Version: 0.21.0
cc: @hannorein
| Maybe a mixed method would work: try to load straight away, but if a dependency is missing throw an error and try again after all packages are downloaded?
Is this just an error message in the console or does the package not work? Is it possible the package itself works fine despite the error message?
Here's a short test to see if it works:
```
import rebound, reboundx
sim = rebound.Simulation()
sim.add(m=1.)
sim.add(m=3e-3,a=4)
sim.add(m=5e-3,a=10)
rebx = reboundx.Extras(sim)
mof = rebx.load_force("modify_orbits_forces")
rebx.add_force(mof)
sim.particles[2].params["tau_a"] = -200000
sim.particles[2].params["tau_e"] = -400000
sim.integrate(1e5)
```
> Maybe a mixed method would work: try to load straight away, but if a dependency is missing throw an error and try again after all packages are downloaded?
Sounds good. I opened a PR with the retrying approach: #2963.
> Is this just an error message in the console or does the package not work? Is it possible the package itself works fine despite the error message?

This is what I get in the console.
(I still cannot reproduce the issue.)
In reboundx's `__init__.py` there is an `import rebound` statement, but after the line that throws the above error. Would it make sense to just move the import statement further up? Then rebound would have to be installed by the time the library is used.
> In reboundx's `__init__.py` there is an `import rebound` statement, but after the line that throws the above error. Would it make sense to just move the import statement further up? Then rebound would have to be installed by the time the library is used.
Well, it is not related to the import order. The problem happens because `libreboundx.so` (inside reboundx) depends on `librebound.so` (inside rebound) but we download them simultaneously in a random order so if `libreboundx.so` is downloaded and loaded before `librebound.so`, it will fail.
Hopefully, #2963 would fix it.
But if reboundx imports rebound (which loads `librebound.so`) before it loads `libreboundx.so` shouldn't this solve the problem?
Nope. The problem occurs in the loading system before any Python code is executed. `a.so` contains metadata saying it depends on `b.so`, so when `a.so` is loaded it needs to see the symbols from `b.so`. If `b.so` isn't around yet, an error is raised due to missing symbols.
In particular, the issue is that wasm cannot be loaded synchronously, so everything needs to be loaded ahead of time.
Oh, OK. Thanks. That was the part I was missing.
|
pyodide/pyodide | 3,028 | pyodide__pyodide-3028 | [
"3011",
"2764"
] | 9ef3cac19827f0215aaf1fa6161a773551d40728 | diff --git a/packages/micropip/src/micropip/_compat_in_pyodide.py b/packages/micropip/src/micropip/_compat_in_pyodide.py
--- a/packages/micropip/src/micropip/_compat_in_pyodide.py
+++ b/packages/micropip/src/micropip/_compat_in_pyodide.py
@@ -23,7 +23,7 @@ async def fetch_bytes(url: str, kwargs: dict[str, str]) -> IO[bytes]:
if parsed_url.scheme == "emfs":
return open(parsed_url.path, "rb")
if parsed_url.scheme == "file":
- result_bytes = (await loadBinaryFile("", parsed_url.path)).to_bytes()
+ result_bytes = (await loadBinaryFile(parsed_url.path)).to_bytes()
else:
result_bytes = await (await pyfetch(url, **kwargs)).bytes()
return BytesIO(result_bytes)
diff --git a/pyodide-build/pyodide_build/common.py b/pyodide-build/pyodide_build/common.py
--- a/pyodide-build/pyodide_build/common.py
+++ b/pyodide-build/pyodide_build/common.py
@@ -109,8 +109,8 @@ def find_matching_wheels(wheel_paths: Iterable[Path]) -> Iterator[Path]:
"fpcast-test",
"sharedlib-test-py",
"cpp-exceptions-test",
- "_ssl",
- "_lzma",
+ "ssl",
+ "lzma",
"pytest",
"tblib",
}
diff --git a/src/py/_pyodide/_importhook.py b/src/py/_pyodide/_importhook.py
--- a/src/py/_pyodide/_importhook.py
+++ b/src/py/_pyodide/_importhook.py
@@ -152,7 +152,12 @@ def __init__(self) -> None:
self.stdlibs = sys.stdlib_module_names | {"test"}
# TODO: put list of unvendored stdlibs to somewhere else?
- self.unvendored_stdlibs = {"distutils", "test", "_ssl", "_lzma"} & self.stdlibs
+ self.unvendored_stdlibs = {
+ "distutils",
+ "test",
+ "ssl",
+ "lzma",
+ } & self.stdlibs
def find_spec(
self,
| diff --git a/packages/_lzma/test_lzma.py b/packages/lzma/test_lzma.py
similarity index 85%
rename from packages/_lzma/test_lzma.py
rename to packages/lzma/test_lzma.py
--- a/packages/_lzma/test_lzma.py
+++ b/packages/lzma/test_lzma.py
@@ -1,7 +1,7 @@
from pytest_pyodide import run_in_pyodide
-@run_in_pyodide(packages=["test", "_lzma"], pytest_assert_rewrites=False)
+@run_in_pyodide(packages=["test", "lzma"], pytest_assert_rewrites=False)
def test_lzma(selenium):
# TODO: libregrtest.main(["test_lzma"]) doesn't collect any tests for some unknown reason.
diff --git a/packages/_ssl/test_ssl.py b/packages/ssl/test_ssl.py
similarity index 89%
rename from packages/_ssl/test_ssl.py
rename to packages/ssl/test_ssl.py
--- a/packages/_ssl/test_ssl.py
+++ b/packages/ssl/test_ssl.py
@@ -1,7 +1,7 @@
from pytest_pyodide import run_in_pyodide
-@run_in_pyodide(packages=["test", "_ssl"], pytest_assert_rewrites=False)
+@run_in_pyodide(packages=["test", "ssl"], pytest_assert_rewrites=False)
def test_ssl(selenium):
import platform
import unittest
@@ -17,6 +17,7 @@ def test_ssl(selenium):
"test_verify_flags",
"test_subclass",
"test_lib_reason",
+ "test_unwrap",
]
try:
diff --git a/src/tests/test_package_loading.py b/src/tests/test_package_loading.py
--- a/src/tests/test_package_loading.py
+++ b/src/tests/test_package_loading.py
@@ -2,15 +2,18 @@
from pathlib import Path
import pytest
+from pytest_pyodide.fixture import selenium_common
+from pytest_pyodide.server import spawn_web_server
+from pytest_pyodide.utils import parse_driver_timeout, set_webdriver_script_timeout
-from conftest import DIST_PATH
+from conftest import DIST_PATH, ROOT_PATH
-def get_pyparsing_wheel_name():
+def get_pyparsing_wheel_name() -> str:
return list(DIST_PATH.glob("pyparsing*.whl"))[0].name
-def get_pytz_wheel_name():
+def get_pytz_wheel_name() -> str:
return list(DIST_PATH.glob("pytz*.whl"))[0].name
@@ -58,10 +61,33 @@ def test_load_from_url(selenium_standalone, web_server_secondary, active_server)
selenium.run("import pytz")
-def test_load_relative_url(selenium_standalone):
- print(get_pytz_wheel_name())
- selenium_standalone.load_package(f"./{get_pytz_wheel_name()}")
- selenium_standalone.run("import pytz")
+def test_load_relative_url(
+ request, runtime, web_server_main, playwright_browsers, tmp_path
+):
+ url, port, _ = web_server_main
+ test_html = (ROOT_PATH / "src/templates/test.html").read_text()
+ test_html = test_html.replace("./pyodide.js", f"http://{url}:{port}/pyodide.js")
+ (tmp_path / "test.html").write_text(test_html)
+ pytz_wheel = get_pytz_wheel_name()
+ pytz1_wheel = pytz_wheel.replace("pytz", "pytz1")
+ shutil.copy(DIST_PATH / pytz_wheel, tmp_path / pytz1_wheel)
+
+ with spawn_web_server(tmp_path) as web_server, selenium_common(
+ request,
+ runtime,
+ web_server,
+ load_pyodide=True,
+ browsers=playwright_browsers,
+ script_type="classic",
+ ) as selenium, set_webdriver_script_timeout(
+ selenium, script_timeout=parse_driver_timeout(request.node)
+ ):
+ if selenium.browser == "node":
+ selenium.run_js(f"process.chdir('{tmp_path.resolve()}')")
+ selenium.load_package(pytz1_wheel)
+ selenium.run(
+ "import pytz; from pyodide_js import loadedPackages; print(loadedPackages.pytz1)"
+ )
def test_list_loaded_urls(selenium_standalone):
diff --git a/src/tests/test_pyodide.py b/src/tests/test_pyodide.py
--- a/src/tests/test_pyodide.py
+++ b/src/tests/test_pyodide.py
@@ -1,11 +1,13 @@
import re
from collections.abc import Sequence
+from pathlib import Path
from textwrap import dedent
from typing import Any
import pytest
from pytest_pyodide import run_in_pyodide
+from conftest import ROOT_PATH
from pyodide.code import CodeRunner, eval_code, find_imports, should_quiet # noqa: E402
@@ -1197,7 +1199,7 @@ def test_unvendored_stdlib(selenium_standalone):
import pytest
- unvendored_stdlibs = ["test", "_ssl", "_lzma"]
+ unvendored_stdlibs = ["test", "ssl", "lzma"]
removed_stdlibs = ["pwd", "turtle", "tkinter"]
for lib in unvendored_stdlibs:
@@ -1234,3 +1236,59 @@ def test_unvendored_stdlib(selenium_standalone):
ModuleNotFoundError, match="removed from the Python standard library"
):
finder.find_spec(lib, None)
+
+
[email protected]_browsers(chrome="Node only", firefox="Node only")
+def test_relative_index_url(selenium, tmp_path):
+ tmp_dir = Path(tmp_path)
+ import subprocess
+
+ version_result = subprocess.run(
+ ["node", "-v"], capture_output=True, encoding="utf8"
+ )
+ extra_node_args = []
+ if version_result.stdout.startswith("v14"):
+ extra_node_args.append("--experimental-wasm-bigint")
+
+ import shutil
+
+ shutil.copy(ROOT_PATH / "dist/pyodide.js", tmp_dir / "pyodide.js")
+ shutil.copytree(ROOT_PATH / "dist/node_modules", tmp_dir / "node_modules")
+
+ result = subprocess.run(
+ [
+ "node",
+ *extra_node_args,
+ "-e",
+ rf"""
+ const loadPyodide = require("{tmp_dir / "pyodide.js"}").loadPyodide;
+ async function main(){{
+ py = await loadPyodide({{indexURL: "./dist"}});
+ console.log("\n");
+ console.log(py._module.API.config.indexURL);
+ }}
+ main();
+ """,
+ ],
+ cwd=ROOT_PATH,
+ capture_output=True,
+ encoding="utf8",
+ )
+ import textwrap
+
+ def print_result(result):
+ if result.stdout:
+ print(" stdout:")
+ print(textwrap.indent(result.stdout, " "))
+ if result.stderr:
+ print(" stderr:")
+ print(textwrap.indent(result.stderr, " "))
+
+ if result.returncode:
+ print_result(result)
+ result.check_returncode()
+
+ try:
+ assert result.stdout.strip().split("\n")[-1] == str(ROOT_PATH / "dist") + "/"
+ finally:
+ print_result(result)
| Relative URLs in pyodide.loadPackage
## π Bug
<!-- A clear and concise description of what the bug is. -->
The documentation states that [pyodide.loadPackage](https://pyodide.org/en/stable/usage/api/js-api.html#pyodide.loadPackage) supports relative URLs. I'm trying to load an out-of-tree wheel from my local webserver, but this doesn't seem to work out well.
### To Reproduce
<!-- Minimal code example to reproduce the bug. -->
```js
await pyodide.loadPackage("dist/igraph-0.9.11-cp310-cp310-emscripten_3_1_14_wasm32.whl");
```
or
```js
await pyodide.loadPackage("./dist/igraph-0.9.11-cp310-cp310-emscripten_3_1_14_wasm32.whl");
```
Pyodide tries to load the wheel from `https://cdn.jsdelivr.net/pyodide/v0.21.1/full/dist/igraph-0.9.11-cp310-cp310-emscripten_3_1_14_wasm32.whl`.
### Expected behavior
<!-- FILL IN -->
Load the wheel from the relative URL.
### Environment
- Pyodide Version<!-- (e.g. 1.8.1) -->: 0.21.1
- Browser version<!-- (e.g. Chrome 95.0.4638.54) -->: Firefox ESR 91.12.0, Chromium 104.0.5112.101
- Any other relevant information:
<!-- If you are building Pyodide by yourself, please also include these information: -->
<!--
- Commit hash of Pyodide git repository:
- Build environment<!--(e.g. Ubuntu 18.04, pyodide/pyodide-env:19 docker)- ->:
-->
### Additional context
<!-- Add any other context about the problem here. -->
Pyodide crashes when loaded in a Jest Unittest.
## π Bug
I tried to write unittests for some code utilizing pyodide and vue. I did run into a whole set of errors until I failed to resolve `TextEncoder is not defined` while trying to load pyodide.
Summary:
* pyodide with `node` works like a charm.
* jest only starts with node option `--experimental-vm-modules`
* with jest `indexURL` has to be defined, otherwise the result is a `Error: Cannot find module '.../pyodide-test/node_modules/src/js/pyodide.asm.js' from 'node_modules/pyodide/pyodide.js'`
* `pyodide.asm.js` crashes with `Error: thrown: bad input to processPackageDataError:` unless modified (see 'To Reproduce')
* (currently unresolved) pyodide crashes with `ReferenceError: TextEncoder is not defined`
If anyone could give me a hint on how to resolve the current showstopper, I am ready to look into any follow-up issue (even if I am not sure I can resolve it).
### To Reproduce
Enable vue testing with jest via
```
module.exports = {
preset: '@vue/cli-plugin-unit-jest'
}
```
Modify `pyodide.asm.js` by disabling
`assert(arrayBuffer instanceof ArrayBuffer, "bad input to processPackageData");` in `function processPackageData(arrayBuffer)`, because `arrayBuffer instanceof ArrayBuffer` is `false` even though `arrayBuffer.constructor.name == "ArrayBuffer"` is `true` (or one could change the assertion to work in this case, but the file is auto-generated anyway...).
Insert your local path in the code below as `indexURL`:
```
describe('pyodide', () => {
let pyodide = null;
beforeAll(async () => {
let pyodide_pkg = await require("pyodide/pyodide.js");
pyodide = await pyodide_pkg.loadPyodide({
indexURL: "/this/is/your/path/to/the/local/project/node_modules/pyodide/",
})
})
it('can load', () => {
expect(pyodide).not.toBe(null)
});
})
```
run `node --experimental-vm-modules node_modules/jest/bin/jest.js`
### Expected behavior
Test successful.
### Observed behavior
Test fails with error message
```
TextEncoder is not defined
ReferenceError: TextEncoder is not defined
at createStdinWrapper (/home/sebastian/workspace/pyodide_bug/pyodide-test/node_modules/src/js/module.ts:63:19)
at /home/sebastian/workspace/pyodide_bug/pyodide-test/node_modules/src/js/module.ts:54:22
at callRuntimeCallbacks (/home/sebastian/workspace/pyodide_bug/pyodide-test/node_modules/pyodide/pyodide.asm.js:9796:25)
at preRun (/home/sebastian/workspace/pyodide_bug/pyodide-test/node_modules/pyodide/pyodide.asm.js:6586:17)
at run (/home/sebastian/workspace/pyodide_bug/pyodide-test/node_modules/pyodide/pyodide.asm.js:56432:17)
at runCaller (/home/sebastian/workspace/pyodide_bug/pyodide-test/node_modules/pyodide/pyodide.asm.js:56385:33)
at removeRunDependency (/home/sebastian/workspace/pyodide_bug/pyodide-test/node_modules/pyodide/pyodide.asm.js:6664:25)
at receiveInstance (/home/sebastian/workspace/pyodide_bug/pyodide-test/node_modules/pyodide/pyodide.asm.js:6762:21)
at receiveInstantiationResult (/home/sebastian/workspace/pyodide_bug/pyodide-test/node_modules/pyodide/pyodide.asm.js:6768:21)
```
### Environment
node: v16.13.1
pyodide: 0.20.1-alpha.2
`package.json`:
```
{
"name": "pyodide-test",
"version": "0.1.0",
"private": true,
"scripts": {
"serve": "vue-cli-service serve",
"build": "vue-cli-service build",
"lint": "vue-cli-service lint",
"test": "jest",
"offline": "node src/offline.js"
},
"dependencies": {
"@vue/cli-plugin-unit-jest": "^5.0.4",
"@vue/test-utils": "^2.0.0",
"core-js": "^3.6.5",
"pyodide": "^0.20.1-alpha.2",
"vue": "^3.0.0"
},
"devDependencies": {
"@vue/cli-plugin-babel": "~4.5.13",
"@vue/cli-plugin-eslint": "~4.5.13",
"@vue/cli-service": "~4.5.13",
"@vue/compiler-sfc": "^3.0.0",
"@vue/vue3-jest": "^27.0.0",
"babel-eslint": "^10.1.0",
"eslint": "^6.7.2",
"eslint-plugin-vue": "^7.0.0"
},
"eslintConfig": {
"root": true,
"env": {
"node": true
},
"extends": [
"plugin:vue/vue3-essential",
"eslint:recommended"
],
"parserOptions": {
"parser": "babel-eslint"
},
"rules": {}
},
"browserslist": [
"> 1%",
"last 2 versions",
"not dead"
]
}
```
### Additional context
Nice piece of software.
I made a test repository in https://github.com/msebas/pyodide-test to simplify reproducing the problem above.
|
Looks like there is a solution here:
https://stackoverflow.com/questions/68468203/why-am-i-getting-textencoder-is-not-defined-in-jest
Hello and thanks.
Seems like this does resolve the problem. Now the test finishes without issues. Sadly, I do not know how to modify the build configuration to automate this process, so I am closing this issue.
~~On top it seems like when now using a new package install out of the bat the only problem remaining is defining IndexURL. It seems like a dependent package was causing the `processPackageDataError` and this seems to be updated now. As a result I could not reproduce the error in any new installation.~~
Kind regards
I made a mistake. The error does NOT occur with the given setup. I edited the setup and made a test repository in https://github.com/msebas/pyodide-test to simplify reproduction.
The problem does only occur if jest is configured to use
```
module.exports = {
preset: '@vue/cli-plugin-unit-jest'
}
```
I do not know if this is something that should work, but from my point of view this reopens the problem itself. | 2022-08-28T19:02:06 |
pyodide/pyodide | 3,074 | pyodide__pyodide-3074 | [
"3068"
] | c53e229175e41c1cd3c81e9bb5014f493ee3f27e | diff --git a/tools/bump_version.py b/tools/bump_version.py
--- a/tools/bump_version.py
+++ b/tools/bump_version.py
@@ -64,6 +64,16 @@ def build_version_pattern(pattern):
build_version_pattern(r"version\s*=\s*{{{python_version}}}"),
prerelease=False,
),
+ Target(
+ ROOT / "src/js/version.ts",
+ build_version_pattern('version: string = "{python_version}"'),
+ prerelease=True,
+ ),
+ Target(
+ ROOT / "src/core/pre.js",
+ build_version_pattern('API.version = "{python_version}"'),
+ prerelease=True,
+ ),
]
JS_TARGETS = [
| diff --git a/src/tests/test_pyodide.py b/src/tests/test_pyodide.py
--- a/src/tests/test_pyodide.py
+++ b/src/tests/test_pyodide.py
@@ -1126,6 +1126,24 @@ def test_home_directory(selenium_standalone_noload):
)
+def test_version_variable(selenium):
+ js_version = selenium.run_js(
+ """
+ return pyodide.version
+ """
+ )
+
+ core_version = selenium.run_js(
+ """
+ return pyodide._api.version
+ """
+ )
+
+ from pyodide import __version__ as py_version
+
+ assert js_version == py_version == core_version
+
+
def test_sys_path0(selenium):
selenium.run_js(
"""
| Export version from pyodide JS module
## π Feature
In `pyodide.d.ts` I see `declare let version: string;` but it's not exported. It'd be great if it was exported so that I can do this:
```ts
import {version} from "pyodide";
```
### Motivation
I have [some code](https://github.com/alexmojaki/pyodide-worker-runner/blob/e7dd3d0ee1dff457bf9d6104944477840a83e5a7/lib/index.ts#L16) that roughly looks like this:
```ts
import {loadPyodide, PyodideInterface} from "pyodide";
const version = "0.21.1";
const indexURL = `https://cdn.jsdelivr.net/pyodide/v${version}/full/`;
const pyodide: PyodideInterface = await loadPyodide({indexURL});
if (pyodide.version !== version) {
throw new Error(
`loadPyodide loaded version ${pyodide.version} instead of ${version}`,
);
}
```
I'd like to import `version` instead of setting it manually, so that it always automatically matches whatever version of pyodide is installed.
| Thanks for the suggestion @alexmojaki!
https://github.com/pyodide/pyodide/blob/3e2d7f29a5b242882f2b74bf89319220815136b1/src/js/pyodide.ts#L147
It seems like we are setting the version string of the JS package dynamically during `loadPyodide`, using the version of the Pyodide Python package.
Probably we should have a static version string for JS as you suggested, and maybe we should reject loading if the requested Pyodide Python package version mismatches the JS package version. | 2022-09-06T06:15:19 |
pyodide/pyodide | 3,082 | pyodide__pyodide-3082 | [
"2991"
] | c53e229175e41c1cd3c81e9bb5014f493ee3f27e | diff --git a/pyodide-build/pyodide_build/pywasmcross.py b/pyodide-build/pyodide_build/pywasmcross.py
--- a/pyodide-build/pyodide_build/pywasmcross.py
+++ b/pyodide-build/pyodide_build/pywasmcross.py
@@ -17,7 +17,7 @@
INVOKED_PATH = Path(INVOKED_PATH_STR)
-SYMLINKS = {"cc", "c++", "ld", "ar", "gcc", "gfortran", "cargo"}
+SYMLINKS = {"cc", "c++", "ld", "ar", "gcc", "ranlib", "strip", "gfortran", "cargo"}
IS_COMPILER_INVOCATION = INVOKED_PATH.name in SYMLINKS
if IS_COMPILER_INVOCATION:
@@ -547,6 +547,12 @@ def handle_command_generate_args(
# distutils doesn't use the c++ compiler when compiling c++ <sigh>
if any(arg.endswith((".cpp", ".cc")) for arg in line):
new_args = ["em++"]
+ elif cmd == "ranlib":
+ line[0] = "emranlib"
+ return line
+ elif cmd == "strip":
+ line[0] = "emstrip"
+ return line
else:
return line
| diff --git a/pyodide-build/pyodide_build/tests/test_pywasmcross.py b/pyodide-build/pyodide_build/tests/test_pywasmcross.py
--- a/pyodide-build/pyodide_build/tests/test_pywasmcross.py
+++ b/pyodide-build/pyodide_build/tests/test_pywasmcross.py
@@ -45,13 +45,16 @@ def _inner(line, *pargs):
def generate_args(line: str, args: Any, is_link_cmd: bool = False) -> str:
splitline = line.split()
res = handle_command_generate_args(splitline, args, is_link_cmd)
- for arg in [
- "-Werror=implicit-function-declaration",
- "-Werror=mismatched-parameter-types",
- "-Werror=return-type",
- ]:
- assert arg in res
- res.remove(arg)
+
+ if res[0] in ("emcc", "em++"):
+ for arg in [
+ "-Werror=implicit-function-declaration",
+ "-Werror=mismatched-parameter-types",
+ "-Werror=return-type",
+ ]:
+ assert arg in res
+ res.remove(arg)
+
if "-c" in splitline:
include_index = res.index("python/include")
del res[include_index]
@@ -70,7 +73,20 @@ def test_handle_command():
"echo",
"wasm32-emscripten",
]
- assert generate_args("gcc test.c", args) == "emcc test.c"
+
+ proxied_commands = {
+ "cc": "emcc",
+ "c++": "em++",
+ "gcc": "emcc",
+ "ld": "emcc",
+ "ar": "emar",
+ "ranlib": "emranlib",
+ "strip": "emstrip",
+ }
+
+ for cmd, proxied_cmd in proxied_commands.items():
+ assert generate_args(cmd, args) == proxied_cmd
+
assert (
generate_args("gcc -c test.o -o test.so", args, True)
== "emcc -c test.o -o test.so"
| `pywasmcross` does not proxy `strip`
## π Bug
Calls to the `strip` executable invoke the GNU implementation rather than `emstrip`
## Proposed Fix
Shim `strip` in addition to `c++` etc.
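Concretely, the proxying table would extend along these lines (a sketch mirroring the existing shimmed tool names; `emstrip` and `emranlib` are Emscripten's wrappers for the corresponding binutils tools):

```python
# Sketch: host tool name -> Emscripten equivalent used by the compiler shim.
PROXIED_COMMANDS = {
    "cc": "emcc",
    "c++": "em++",
    "gcc": "emcc",
    "ld": "emcc",
    "ar": "emar",
    "ranlib": "emranlib",
    "strip": "emstrip",
}
```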
### Environment
- Pyodide Version: 0.21.0
- Browser version: NA
| This is probably a reasonable idea. Could you provide a bit more information on what you were doing when this problem came up?
Hi @hoodmane! Apologies for not including a reproducer. I'm working on generating a WASM module for https://github.com/scikit-hep/awkward/, and I've found that symbols are being stripped as part of the build process, which I think is coming from pybind11's `pybind11_add_module`. | 2022-09-08T02:09:15 |
pyodide/pyodide | 3,136 | pyodide__pyodide-3136 | [
"3135"
] | 1a66912341e12f8538f82acbe5e4f06188236115 | diff --git a/docs/conf.py b/docs/conf.py
--- a/docs/conf.py
+++ b/docs/conf.py
@@ -11,6 +11,8 @@
from typing import Any
from unittest import mock
+panels_add_bootstrap_css = False
+
# -- Project information -----------------------------------------------------
project = "Pyodide"
| The content area in the docs is too narrow
## π Documentation
In the documentation strings, rendered code examples only fit 63 characters of width. It would be nice if we could make the content area a bit larger so that code examples fit at least 80 characters. On my screen, the content area is exactly the middle third of the screen, with the left and right thirds devoted to menus.
| Sounds good to me. It looks like we had wider area in [0.18.0](https://pyodide.org/en/0.18.0/usage/quickstart.html).
Yes that's way better. I wonder what changed.
You might need to add
```
panels_add_bootstrap_css= False
```
to the ``conf.py`` file? See
https://sphinx-panels.readthedocs.io/en/latest/#sphinx-configuration
and
https://github.com/executablebooks/sphinx-book-theme/issues/194#issuecomment-697028667
``sphinx_panels`` was added in #1960, so that seems consistent with when the width changed... I can't test this locally though, because I can't easily get the docs to build.
> because I can't easily get the docs to build.
Maybe that should be a separate issue. I agree that building the docs locally is a bit of a pain.
> add `panels_add_bootstrap_css= False`
This fixes the problem, thanks @jobovy! | 2022-09-20T04:35:07 |
|
pyodide/pyodide | 3,264 | pyodide__pyodide-3264 | [
"3167",
"3183"
] | a1e43afbc017f36bc4329e882e6aa28dd8dc4699 | diff --git a/pyodide-build/pyodide_build/buildpkg.py b/pyodide-build/pyodide_build/buildpkg.py
--- a/pyodide-build/pyodide_build/buildpkg.py
+++ b/pyodide-build/pyodide_build/buildpkg.py
@@ -566,11 +566,6 @@ def package_wheel(
# to maximize sanity.
replace_so_abi_tags(wheel_dir)
- vendor_sharedlib = build_metadata.vendor_sharedlib
- if vendor_sharedlib:
- lib_dir = Path(common.get_make_flag("WASM_LIBRARY_DIR"))
- copy_sharedlibs(wheel, wheel_dir, lib_dir)
-
post = build_metadata.post
if post:
print("Running post script in ", str(Path.cwd().absolute()))
@@ -580,6 +575,11 @@ def package_wheel(
print("ERROR: post failed")
exit_with_stdio(result)
+ vendor_sharedlib = build_metadata.vendor_sharedlib
+ if vendor_sharedlib:
+ lib_dir = Path(common.get_make_flag("WASM_LIBRARY_DIR"))
+ copy_sharedlibs(wheel, wheel_dir, lib_dir)
+
python_dir = f"python{sys.version_info.major}.{sys.version_info.minor}"
host_site_packages = Path(host_install_dir) / f"lib/{python_dir}/site-packages"
if build_metadata.cross_build_env:
| Embed external shared libraries into the wheel
## Problem statement
Normally, native Linux Python wheels include their dependent external shared libraries. This is done by [`auditwheel`](https://github.com/pypa/auditwheel), which copies shared libraries into the wheel and repairs the RPATH of ELF binaries.
On the other hand, we serve external shared libraries separately from the Python wheel. This is a bit annoying since it means wheels are not portable on their own. Another partially related problem is that if a wheel does contain shared libraries, it does not work well, because we have to load external shared libraries globally (unless we manage to make #3054 work) but the libraries inside the wheel are not loaded globally.
## Proposal
I think it would be nice if we could embed shared libraries into the wheels, so wheels can be distributed and used without downloading additional shared libraries.
For this, I have recently been working on a tool called [`auditwheel-emscripten`](https://github.com/ryanking13/auditwheel-emscripten), which does a job similar to auditwheel's but for Emscripten / Pyodide wheels: it finds the external libraries that an Emscripten WASM module depends on and can copy them into the wheel file.
## Implementation issues
One issue is that there is no such thing as RPATH in Emscripten WASM modules. [They only contain the name of the shared library](https://github.com/WebAssembly/tool-conventions/blob/main/DynamicLinking.md). So we will need to manually handle how the WASM module should find its dependent libraries. But [since we are doing some heuristic path resolution already](https://github.com/pyodide/pyodide/blob/6214bafe5dacae7190781bb6a0091de867aa35db/src/js/load-package.ts#L374-L397), I believe it is not a big deal.
Another issue is how we should distinguish Python C-extension modules from plain shared libraries. We need to load shared libraries globally (for now), but we want to load Python extension modules locally, so we need some way to tell them apart.
A naive idea is to use the prefix and suffix of the filename: check whether the filename starts with `lib` or ends with an extension suffix (`cpython-310-wasm32-emscripten`). But it would be nice if there were a more proper way.
Note that loading a Python C-extension module globally will not cause an error (unless there is a symbol collision), but loading a shared library locally will raise an error.
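A minimal sketch of that filename heuristic (the suffix is hard-coded for illustration; nothing in the tool is assumed to work exactly like this yet):

```python
# Sketch: guess whether a .so inside a wheel is a Python C-extension module
# (to be loaded locally) or a plain shared library (to be loaded globally).
EXT_SUFFIX = ".cpython-310-wasm32-emscripten.so"  # assumed Emscripten suffix

def classify_so(filename: str) -> str:
    if filename.endswith(EXT_SUFFIX):
        return "extension"  # Python C-extension module
    if filename.startswith("lib"):
        return "shared-library"
    return "unknown"
```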
BLD Add option to embed shared libraries into the wheel
### Description
This is an implementation of #3167. It works, but it is still work-in-progress as this change adds load time overhead of checking if a shared library needs to be loaded locally or globally. I am looking for a better and faster way.
- Adds a new key in meta.yaml: `vendor-sharedlib`.
- If it is set to true, `auditwheel-emscripten` will find shared libraries used by the wheel and copy them into the wheel.
- In this PR, I only applied this key to `h5py` and `shapely`. Other packages are not affected.
### Checklists
- [ ] Add a [CHANGELOG](https://github.com/pyodide/pyodide/blob/main/docs/project/changelog.md) entry
- [ ] Add / update tests
- [ ] Add new / update outdated documentation
| Thanks for starting this discussion! This is indeed the Python way and probably something we really have to do for packages built out of tree. So having auditwheel-emscripten is indeed very valuable.
For packages we build in tree, I'm less sure, as this would lead to potentially redundant downloads (e.g. BLAS downloaded multiple times for numpy and scipy once we switch to a better BLAS). Though it would be worth analysing how many dependent packages each shared library has in the current dependency graph, and whether vendoring it inside the wheel would indeed be that significant.
If we only have wheels and no standalone shared dependencies, this would also make hosting easier #3049 as it means we can more easily reuse standard Python tooling.
> A naive idea that I am thinking of is using the prefix and suffix of the filename: Checking if the filename startswith lib or endswith an extension suffix (cpython-310-wasm32-emscripten). But it would be nice if there is a more proper way.
Maybe listing symbols and checking if there is anything matching `PyInit_*`?
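A rough sketch of that check (assuming the toolchain's `llvm-nm` can list the symbols of an Emscripten side module, which may not hold for every file):

```py
import subprocess

def exports_pyinit(path: str) -> bool:
    # If the module defines a PyInit_* symbol, treat it as a C-extension.
    out = subprocess.run(["llvm-nm", path], capture_output=True, text=True).stdout
    return any(
        line.split()[-1].startswith("PyInit_")
        for line in out.splitlines()
        if line.strip()
    )
```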
> One other issue is how we should distinguish Python c-extension modules and shared libraries. We need to load shared libraries globally (for now), but we want to load Python extension modules locally. So we need some ways to distinguish them. A naive idea that I am thinking of is using the prefix and suffix of the filename: Checking if the filename startswith `lib` or endswith an extension suffix (`cpython-310-wasm32-emscripten`).
Isn't it an issue with depending on the name that packages sometimes depend on the extension modules of other packages, e.g., how `reboundx` depends on `rebound`'s (#2909)?
> For packages we built in tree, I'm less sure, as this would lead to potentially redundant downloads (e.g. BLAS downloaded multiple times for numpy and scipy once we switch to a better BLAS). Though it would be worth analysing how many dependent packages each shared library has in the current dependency graph, and whether indeed vendoring it inside the wheel would be that significant.
Yes, that could be an issue for some large shared libs like BLAS. In that case we probably need an option in `meta.yaml` to choose whether or not to vendor shared libs into the wheel.
> Maybe listing symbols and checking if there is anything matching `PyInit_*`?
Thanks for the suggestion! That sounds like a nice option. But then we should check how much it increases the load time, because we would have to look at the contents of every shared library.
> Isn't an issue with depending on the name that sometimes packages will depend on the extension modules of other packages, e.g., how `reboundx` depends on `rebound`'s (#2909)?
I think it won't be a problem in the rebound case. Currently, we load `librebound.so` locally first, but its symbols become globally visible when reboundx calls dlopen (via `ctypes.cdll.LoadLibrary`) to load `libreboundx.so` and `librebound.so`.
I think I managed to make this (almost) work. I'll split this up into multiple PRs to make review easier. | 2022-11-17T05:24:09 |
|
pyodide/pyodide | 3,364 | pyodide__pyodide-3364 | [
"3363"
] | 36ccf351d18d5183e1688bd2b55f4df7a2fe7030 | diff --git a/pyodide-build/pyodide_build/mkpkg.py b/pyodide-build/pyodide_build/mkpkg.py
--- a/pyodide-build/pyodide_build/mkpkg.py
+++ b/pyodide-build/pyodide_build/mkpkg.py
@@ -16,6 +16,7 @@
from typing import Any, Literal, NoReturn, TypedDict
from urllib import request
+from packaging.version import Version
from ruamel.yaml import YAML
from .common import parse_top_level_import_name
@@ -264,8 +265,8 @@ def update_package(
old_fmt = "sdist"
pypi_metadata = _get_metadata(package, version)
- pypi_ver = pypi_metadata["info"]["version"]
- local_ver = yaml_content["package"]["version"]
+ pypi_ver = Version(pypi_metadata["info"]["version"])
+ local_ver = Version(yaml_content["package"]["version"])
already_up_to_date = pypi_ver <= local_ver and (
source_fmt is None or source_fmt == old_fmt
)
| `pyodide skeleton pypi --update` doesn't compare versions correctly
## Bug
`pyodide skeleton pypi --update` doesn't compare versions correctly (this was also a problem with the old `make -C packages update-all`).
### To Reproduce
```
/src$ pyodide skeleton pypi --update numcodecs
numcodecs already up to date. Local: 0.9.1 PyPI: 0.11.0
```
### Expected behavior
Update numcodecs to the newer v0.11.0.
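For context, the old code compared the raw version strings, which sort lexicographically; `packaging.version.Version` (as used in the patch above) compares release segments numerically:

```py
>>> "0.9.1" <= "0.11.0"  # lexicographic: "9" sorts after "1"
False
>>> from packaging.version import Version
>>> Version("0.9.1") <= Version("0.11.0")
True
```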
| 2022-12-18T06:36:14 |
||
pyodide/pyodide | 3,377 | pyodide__pyodide-3377 | [
"3375"
] | 28bd91fd2f0b0704b08be3a4f2dd52ec7ed63a6d | diff --git a/docs/sphinx_pyodide/sphinx_pyodide/jsdoc.py b/docs/sphinx_pyodide/sphinx_pyodide/jsdoc.py
--- a/docs/sphinx_pyodide/sphinx_pyodide/jsdoc.py
+++ b/docs/sphinx_pyodide/sphinx_pyodide/jsdoc.py
@@ -66,7 +66,11 @@ def destructure_param(param: dict[str, Any]) -> list[dict[str, Any]]:
child = dict(child)
if "type" not in child:
if "signatures" in child:
- child["comment"] = child["signatures"][0]["comment"]
+ try:
+ child["comment"] = child["signatures"][0]["comment"]
+ except KeyError:
+ # TODO: handle no comment case
+ pass
child["type"] = {
"type": "reflection",
"declaration": dict(child),
| Documentation build failure
Currently, RTD CI is failing following https://github.com/pyodide/pyodide/pull/3268#issuecomment-1359018099; it looks like `sphinx_js` didn't like something in the changes to the JS package. I can also reproduce locally.
<details>
```
Error: ../src/js/interface.ts:6:34 - error TS2304: Cannot find name 'PyodideInterface'.
6 API.makePublicAPI = function (): PyodideInterface {
~~~~~~~~~~~~~~~~
Error: ../src/js/interface.ts:7:3 - error TS2304: Cannot find name 'FS'.
7 FS = Module.FS;
~~
Error: ../src/js/interface.ts:8:3 - error TS2304: Cannot find name 'PATH'.
8 PATH = Module.PATH;
~~~~
Error: ../src/js/interface.ts:9:3 - error TS2304: Cannot find name 'ERRNO_CODES'.
9 ERRNO_CODES = Module.ERRNO_CODES;
~~~~~~~~~~~
Error: ../src/js/interface.ts:11:5 - error TS2552: Cannot find name 'globals'. Did you mean 'global'?
11 globals,
~~~~~~~
../src/js/node_modules/@types/node/globals.global.d.ts:1:13
1 declare var global: typeof globalThis;
~~~~~~
'global' is declared here.
Error: ../src/js/interface.ts:12:5 - error TS18004: No value exists in scope for the shorthand property 'FS'. Either declare one or provide an initializer.
12 FS,
~~
Error: ../src/js/interface.ts:13:5 - error TS18004: No value exists in scope for the shorthand property 'PATH'. Either declare one or provide an initializer.
13 PATH,
~~~~
Error: ../src/js/interface.ts:14:5 - error TS18004: No value exists in scope for the shorthand property 'ERRNO_CODES'. Either declare one or provide an initializer.
14 ERRNO_CODES,
~~~~~~~~~~~
Error: ../src/js/interface.ts:15:5 - error TS18004: No value exists in scope for the shorthand property 'pyodide_py'. Either declare one or provide an initializer.
15 pyodide_py,
~~~~~~~~~~
Error: ../src/js/interface.ts:16:5 - error TS18004: No value exists in scope for the shorthand property 'version'. Either declare one or provide an initializer.
16 version,
~~~~~~~
Error: ../src/js/interface.ts:17:5 - error TS18004: No value exists in scope for the shorthand property 'loadPackage'. Either declare one or provide an initializer.
17 loadPackage,
~~~~~~~~~~~
Error: ../src/js/interface.ts:18:5 - error TS18004: No value exists in scope for the shorthand property 'isPyProxy'. Either declare one or provide an initializer.
18 isPyProxy,
~~~~~~~~~
Error: ../src/js/interface.ts:19:5 - error TS18004: No value exists in scope for the shorthand property 'runPython'. Either declare one or provide an initializer.
19 runPython,
~~~~~~~~~
Error: ../src/js/interface.ts:20:5 - error TS18004: No value exists in scope for the shorthand property 'runPythonAsync'. Either declare one or provide an initializer.
20 runPythonAsync,
~~~~~~~~~~~~~~
Error: ../src/js/interface.ts:21:5 - error TS18004: No value exists in scope for the shorthand property 'registerJsModule'. Either declare one or provide an initializer.
21 registerJsModule,
~~~~~~~~~~~~~~~~
Error: ../src/js/interface.ts:22:5 - error TS18004: No value exists in scope for the shorthand property 'unregisterJsModule'. Either declare one or provide an initializer.
22 unregisterJsModule,
~~~~~~~~~~~~~~~~~~
Error: ../src/js/interface.ts:23:5 - error TS18004: No value exists in scope for the shorthand property 'setInterruptBuffer'. Either declare one or provide an initializer.
23 setInterruptBuffer,
~~~~~~~~~~~~~~~~~~
Error: ../src/js/interface.ts:24:5 - error TS18004: No value exists in scope for the shorthand property 'checkInterrupt'. Either declare one or provide an initializer.
24 checkInterrupt,
~~~~~~~~~~~~~~
Error: ../src/js/interface.ts:25:5 - error TS18004: No value exists in scope for the shorthand property 'toPy'. Either declare one or provide an initializer.
25 toPy,
~~~~
Error: ../src/js/interface.ts:26:5 - error TS18004: No value exists in scope for the shorthand property 'pyimport'. Either declare one or provide an initializer.
26 pyimport,
~~~~~~~~
Error: ../src/js/interface.ts:27:5 - error TS18004: No value exists in scope for the shorthand property 'unpackArchive'. Either declare one or provide an initializer.
27 unpackArchive,
~~~~~~~~~~~~~
Error: ../src/js/interface.ts:28:5 - error TS18004: No value exists in scope for the shorthand property 'mountNativeFS'. Either declare one or provide an initializer.
28 mountNativeFS,
~~~~~~~~~~~~~
Error: ../src/js/interface.ts:29:5 - error TS18004: No value exists in scope for the shorthand property 'registerComlink'. Either declare one or provide an initializer.
29 registerComlink,
~~~~~~~~~~~~~~~
Error: ../src/js/interface.ts:30:5 - error TS18004: No value exists in scope for the shorthand property 'PythonError'. Either declare one or provide an initializer.
30 PythonError,
~~~~~~~~~~~
Error: ../src/js/interface.ts:31:5 - error TS18004: No value exists in scope for the shorthand property 'PyBuffer'. Either declare one or provide an initializer.
31 PyBuffer,
~~~~~~~~
Traceback (most recent call last):
File "/home/rth/.miniforge/envs/pyodide-env/lib/python3.10/site-packages/sphinx/events.py", line 94, in emit
results.append(listener.handler(self.app, *args))
File "/home/rth/.miniforge/envs/pyodide-env/lib/python3.10/site-packages/sphinx_js/__init__.py", line 60, in analyze
app._sphinxjs_analyzer = analyzer.from_disk(abs_source_paths,
File "/home/rth/.miniforge/envs/pyodide-env/lib/python3.10/site-packages/sphinx_js/typedoc.py", line 37, in from_disk
json = typedoc_output(abs_source_paths,
File "/home/rth/.miniforge/envs/pyodide-env/lib/python3.10/site-packages/sphinx_js/typedoc.py", line 331, in typedoc_output
return load(getreader('utf-8')(temp))
File "/home/rth/.miniforge/envs/pyodide-env/lib/python3.10/json/__init__.py", line 293, in load
return loads(fp.read(),
File "/home/rth/.miniforge/envs/pyodide-env/lib/python3.10/json/__init__.py", line 346, in loads
return _default_decoder.decode(s)
File "/home/rth/.miniforge/envs/pyodide-env/lib/python3.10/json/decoder.py", line 337, in decode
obj, end = self.raw_decode(s, idx=_w(s, 0).end())
File "/home/rth/.miniforge/envs/pyodide-env/lib/python3.10/json/decoder.py", line 355, in raw_decode
raise JSONDecodeError("Expecting value", s, err.value) from None
json.decoder.JSONDecodeError: Expecting value: line 1 column 1 (char 0)
The above exception was the direct cause of the following exception:
Traceback (most recent call last):
File "/home/rth/.miniforge/envs/pyodide-env/lib/python3.10/site-packages/sphinx/cmd/build.py", line 272, in build_main
app = Sphinx(args.sourcedir, args.confdir, args.outputdir,
File "/home/rth/.miniforge/envs/pyodide-env/lib/python3.10/site-packages/sphinx/application.py", line 256, in __init__
self._init_builder()
File "/home/rth/.miniforge/envs/pyodide-env/lib/python3.10/site-packages/sphinx/application.py", line 315, in _init_builder
self.events.emit('builder-inited')
File "/home/rth/.miniforge/envs/pyodide-env/lib/python3.10/site-packages/sphinx/events.py", line 102, in emit
raise ExtensionError(__("Handler %r for event %r threw an exception") %
sphinx.errors.ExtensionError: Handler <function analyze at 0x7f5269aa0ca0> for event 'builder-inited' threw an exception (exception: Expecting value: line 1 column 1 (char 0))
Extension error (sphinx_js):
Handler <function analyze at 0x7f5269aa0ca0> for event 'builder-inited' threw an exception (exception: Expecting value: line 1 column 1 (char 0))
make: *** [Makefile:19: html] Error 2
```
</details>
The other issue (that we cannot do much about) is that when RTD fails, it often doesn't show a status, in particular on main. The build log is in https://readthedocs.org/projects/pyodide/builds/
cc @hoodmane
| 2022-12-20T10:58:17 |
||
pyodide/pyodide | 3,435 | pyodide__pyodide-3435 | [
"3423"
] | 9323d19dc2e48a79bd31f2fa98dcd6b7803c8260 | diff --git a/pyodide-build/pyodide_build/buildall.py b/pyodide-build/pyodide_build/buildall.py
--- a/pyodide-build/pyodide_build/buildall.py
+++ b/pyodide-build/pyodide_build/buildall.py
@@ -571,6 +571,7 @@ def generate_packagedata(
"file_name": pkg.file_name,
"install_dir": pkg.install_dir,
"sha256": _generate_package_hash(Path(output_dir, pkg.file_name)),
+ "imports": [],
}
pkg_type = pkg.package_type
@@ -582,9 +583,11 @@ def generate_packagedata(
)
pkg_entry["depends"] = [x.lower() for x in pkg.run_dependencies]
- pkg_entry["imports"] = (
- pkg.meta.package.top_level if pkg.meta.package.top_level else [name]
- )
+
+ if pkg.package_type not in ("static_library", "shared_library"):
+ pkg_entry["imports"] = (
+ pkg.meta.package.top_level if pkg.meta.package.top_level else [name]
+ )
packages[name.lower()] = pkg_entry
diff --git a/src/py/_pyodide/_importhook.py b/src/py/_pyodide/_importhook.py
--- a/src/py/_pyodide/_importhook.py
+++ b/src/py/_pyodide/_importhook.py
@@ -134,26 +134,15 @@ def register_js_finder() -> None:
STDLIBS = sys.stdlib_module_names | {"test"}
-UNVENDORED_STDLIBS_MAP = {
- "distutils": ["distutils"],
- "ssl": ["ssl", "_ssl"],
- "lzma": ["lzma", "_lzma"],
- "sqlite3": ["sqlite3", "_sqlite3"],
- "hashlib": ["_hashlib"],
-}
-UNVENDORED_STDLIBS = list(UNVENDORED_STDLIBS_MAP.keys())
+# TODO: Move this list to js side
+UNVENDORED_STDLIBS = ["distutils", "ssl", "lzma", "sqlite3", "hashlib"]
UNVENDORED_STDLIBS_AND_TEST = UNVENDORED_STDLIBS + ["test"]
-UNVENDORED_STDLIBS_MODULES = {
- importname: pkgname
- for pkgname, importnames in UNVENDORED_STDLIBS_MAP.items()
- for importname in importnames
-} | {"test": "test"}
from importlib import _bootstrap # type: ignore[attr-defined]
orig_get_module_not_found_error: Any = None
-REPODATA_PACKAGES: list[str] = []
+REPODATA_PACKAGES_IMPORT_TO_PACKAGE_NAME: dict[str, str] = {}
SEE_PACKAGE_LOADING = (
"\nSee https://pyodide.org/en/stable/usage/loading-packages.html for more details."
@@ -161,29 +150,34 @@ def register_js_finder() -> None:
YOU_CAN_INSTALL_IT_BY = """
You can install it by calling:
- await micropip.install("{name}") in Python, or
- await pyodide.loadPackage("{name}") in JavaScript\
+ await micropip.install("{package_name}") in Python, or
+ await pyodide.loadPackage("{package_name}") in JavaScript\
"""
-def get_module_not_found_error(name):
- if name not in REPODATA_PACKAGES and name not in STDLIBS:
- return orig_get_module_not_found_error(name)
+def get_module_not_found_error(import_name):
- if name in UNVENDORED_STDLIBS_MODULES:
- msg = "The module '{name}' is unvendored from the Python standard library in the Pyodide distribution."
- msg += YOU_CAN_INSTALL_IT_BY
- name = UNVENDORED_STDLIBS_MODULES[name]
- elif name in REPODATA_PACKAGES:
- msg = "The module '{name}' is included in the Pyodide distribution, but it is not installed."
+ package_name = REPODATA_PACKAGES_IMPORT_TO_PACKAGE_NAME.get(import_name, "")
+
+ if not package_name and import_name not in STDLIBS:
+ return orig_get_module_not_found_error(import_name)
+
+ if package_name in UNVENDORED_STDLIBS_AND_TEST:
+ msg = "The module '{package_name}' is unvendored from the Python standard library in the Pyodide distribution."
msg += YOU_CAN_INSTALL_IT_BY
- else:
+ elif import_name in STDLIBS:
msg = (
- "The module '{name}' is removed from the Python standard library in the"
+ "The module '{import_name}' is removed from the Python standard library in the"
" Pyodide distribution due to browser limitations."
)
+ else:
+ msg = "The module '{package_name}' is included in the Pyodide distribution, but it is not installed."
+ msg += YOU_CAN_INSTALL_IT_BY
+
msg += SEE_PACKAGE_LOADING
- return ModuleNotFoundError(msg.format(name=name))
+ return ModuleNotFoundError(
+ msg.format(import_name=import_name, package_name=package_name)
+ )
def register_module_not_found_hook(packages: Any) -> None:
@@ -194,48 +188,7 @@ def register_module_not_found_hook(packages: Any) -> None:
in order to prevent any unexpected side effects.
"""
global orig_get_module_not_found_error
- global REPODATA_PACKAGES
- REPODATA_PACKAGES = packages.to_py()
+ global REPODATA_PACKAGES_IMPORT_TO_PACKAGE_NAME
+ REPODATA_PACKAGES_IMPORT_TO_PACKAGE_NAME = packages.to_py()
orig_get_module_not_found_error = _bootstrap._get_module_not_found_error
_bootstrap._get_module_not_found_error = get_module_not_found_error
-
-
-class RepodataPackagesFinder(MetaPathFinder):
- """
- A MetaPathFinder that handles packages in repodata.json.
-
- This class simply raises an error if a package is in repodata.json but not loaded yet.
- This needs to be added to the end of sys.meta_path, so if a package
- is already loaded via pyodide.loadPackage, it can be handled by the existing finder.
- """
-
- def __init__(self, packages: dict[str, Any]) -> None:
- self.repodata_packages = packages
-
- def find_spec(
- self,
- fullname: str,
- path: Sequence[bytes | str] | None,
- target: ModuleType | None = None,
- ) -> ModuleSpec | None:
- [parent, _, _] = fullname.partition(".")
-
- if not parent or parent in sys.modules or parent not in self.repodata_packages:
- return None
-
- return None
-
-
-def register_repodata_packages_finder(packages: Any) -> None:
- """
- A function that adds RepodataPackagesFinder to the end of sys.meta_path.
-
- Note that this finder must be placed in the end of meta_paths
- in order to prevent any unexpected side effects.
- """
-
- for importer in sys.meta_path:
- if isinstance(importer, RepodataPackagesFinder):
- raise RuntimeError("RepodataPackagesFinder already registered")
-
- sys.meta_path.append(RepodataPackagesFinder(packages.to_py()))
| diff --git a/pyodide-build/pyodide_build/tests/_test_recipes/libtest_shared/meta.yaml b/pyodide-build/pyodide_build/tests/_test_recipes/libtest_shared/meta.yaml
new file mode 100644
--- /dev/null
+++ b/pyodide-build/pyodide_build/tests/_test_recipes/libtest_shared/meta.yaml
@@ -0,0 +1,9 @@
+package:
+ name: libtest_shared
+ version: 1.0.0
+source:
+ sha256: deadbeefdeadbeefdeadbeefdeadbeefdeadbeefdeadbeefdeadbeefdeadbeef
+ url: https://dummy-url-that-not-exists.com/libtest.tar.gz
+build:
+ type: shared_library
+ script: "pass"
diff --git a/pyodide-build/pyodide_build/tests/_test_recipes/pkg_1/meta.yaml b/pyodide-build/pyodide_build/tests/_test_recipes/pkg_1/meta.yaml
--- a/pyodide-build/pyodide_build/tests/_test_recipes/pkg_1/meta.yaml
+++ b/pyodide-build/pyodide_build/tests/_test_recipes/pkg_1/meta.yaml
@@ -8,9 +8,11 @@ requirements:
host:
- pkg_1_1
- pkg_3
+ - libtest_shared
run:
- pkg_1_1
- pkg_3
+ - libtest_shared
test:
imports:
- pkg_1
diff --git a/pyodide-build/pyodide_build/tests/test_buildall.py b/pyodide-build/pyodide_build/tests/test_buildall.py
--- a/pyodide-build/pyodide_build/tests/test_buildall.py
+++ b/pyodide-build/pyodide_build/tests/test_buildall.py
@@ -53,7 +53,7 @@ def from_yaml(cls, path):
def test_generate_repodata(tmp_path):
pkg_map = buildall.generate_dependency_graph(
- RECIPE_DIR, {"pkg_1", "pkg_2", "libtest"}
+ RECIPE_DIR, {"pkg_1", "pkg_2", "libtest", "libtest_shared"}
)
hashes = {}
for pkg in pkg_map.values():
@@ -77,17 +77,23 @@ def test_generate_repodata(tmp_path):
"pkg_2",
"pkg_3",
"pkg_3_1",
+ "libtest_shared",
}
assert package_data["packages"]["pkg_1"] == {
"name": "pkg_1",
"version": "1.0.0",
"file_name": "pkg_1.whl",
- "depends": ["pkg_1_1", "pkg_3"],
+ "depends": ["pkg_1_1", "pkg_3", "libtest_shared"],
"imports": ["pkg_1"],
"install_dir": "site",
"sha256": hashes["pkg_1"],
}
+ sharedlib_imports = package_data["packages"]["libtest_shared"]["imports"]
+ assert not sharedlib_imports, (
+ "shared libraries should not have any imports, but got " f"{sharedlib_imports}"
+ )
+
@pytest.mark.parametrize("n_jobs", [1, 4])
def test_build_dependencies(n_jobs, monkeypatch):
@@ -111,6 +117,7 @@ def build(self, args: Any) -> None:
"pkg_2",
"pkg_3",
"pkg_3_1",
+ "libtest_shared",
}
assert build_list.index("pkg_1_1") < build_list.index("pkg_1")
assert build_list.index("pkg_3") < build_list.index("pkg_1")
diff --git a/src/tests/test_pyodide.py b/src/tests/test_pyodide.py
--- a/src/tests/test_pyodide.py
+++ b/src/tests/test_pyodide.py
@@ -1424,6 +1424,14 @@ def test_module_not_found_hook(selenium_standalone):
with pytest.raises(ModuleNotFoundError, match="No module named"):
importlib.import_module("pytest.there_is_no_such_module")
+ # liblzma and openssl are libraries not python packages, so it should just fail.
+ for pkg in ["liblzma", "openssl"]:
+ with pytest.raises(ModuleNotFoundError, match="No module named"):
+ importlib.import_module(pkg)
+
+ with pytest.raises(ModuleNotFoundError, match=r'loadPackage\("hashlib"\)'):
+ importlib.import_module("_hashlib")
+
def test_args(selenium_standalone_noload):
selenium = selenium_standalone_noload
| Import hook for repodata packages shows incorrect results.
## Bug
The import hook for repodata packages is showing incorrect results for some packages.
For example, as shown in https://github.com/pyodide/pyodide/discussions/3419:
```py
import gdal # from osgeo import gdal
PythonError: Traceback (most recent call last):
File "/lib/python3.10/_pyodide/_base.py", line 460, in eval_code
.run(globals, locals)
File "/lib/python3.10/_pyodide/_base.py", line 304, in run
coroutine = eval(self.code, globals, locals)
File "<exec>", line 8, in <module>
ModuleNotFoundError: The module 'gdal' is included in the Pyodide distribution, but it is not installed.
```
This is not intended, as `gdal` is a shared library, not a Python package (at least until we have Python bindings for gdal).
I'll make a fix; this is mostly an FYI.
| 2023-01-07T10:14:50 |
|
pyodide/pyodide | 3,483 | pyodide__pyodide-3483 | [
"3430"
] | af4158da778c94f09e563bcfd37cfd9273e4517f | diff --git a/pyodide-build/pyodide_build/cli/config.py b/pyodide-build/pyodide_build/cli/config.py
--- a/pyodide-build/pyodide_build/cli/config.py
+++ b/pyodide-build/pyodide_build/cli/config.py
@@ -19,7 +19,7 @@ def callback() -> None:
def _get_configs() -> dict[str, str]:
- initialize_pyodide_root()
+ initialize_pyodide_root(quiet=True)
configs: dict[str, str] = get_make_environment_vars()
diff --git a/pyodide-build/pyodide_build/out_of_tree/utils.py b/pyodide-build/pyodide_build/out_of_tree/utils.py
--- a/pyodide-build/pyodide_build/out_of_tree/utils.py
+++ b/pyodide-build/pyodide_build/out_of_tree/utils.py
@@ -1,10 +1,12 @@
import os
+from contextlib import ExitStack, redirect_stdout
+from io import StringIO
from pathlib import Path
from ..common import search_pyodide_root
-def ensure_env_installed(env: Path) -> None:
+def ensure_env_installed(env: Path, *, quiet: bool = False) -> None:
if env.exists():
return
from .. import __version__
@@ -15,11 +17,16 @@ def ensure_env_installed(env: Path) -> None:
"To use out of tree builds with development Pyodide, you must explicitly set PYODIDE_ROOT"
)
- download_xbuildenv(__version__, env)
- install_xbuildenv(__version__, env)
+ with ExitStack() as stack:
+ if quiet:
+ # Prevent writes to stdout
+ stack.enter_context(redirect_stdout(StringIO()))
+ download_xbuildenv(__version__, env)
+ install_xbuildenv(__version__, env)
-def initialize_pyodide_root() -> None:
+
+def initialize_pyodide_root(*, quiet: bool = False) -> None:
if "PYODIDE_ROOT" in os.environ:
return
try:
@@ -29,4 +36,4 @@ def initialize_pyodide_root() -> None:
pass
env = Path(".pyodide-xbuildenv")
os.environ["PYODIDE_ROOT"] = str(env / "xbuildenv/pyodide-root")
- ensure_env_installed(env)
+ ensure_env_installed(env, quiet=quiet)
| On first call, `pyodide config get emscripten_version` returns `Downloading xbuild environment Installing xbuild environment 3.1.27` instead of `3.1.27`
## Bug
The [docs for out-of-tree builds](https://pyodide.org/en/stable/development/building-and-testing-packages.html#building-and-testing-packages-out-of-tree) give this code snippet:
```bash
pip install pyodide-build
git clone https://github.com/emscripten-core/emsdk.git
cd emsdk
PYODIDE_EMSCRIPTEN_VERSION=$(pyodide config get emscripten_version)
./emsdk install ${PYODIDE_EMSCRIPTEN_VERSION}
./emsdk activate ${PYODIDE_EMSCRIPTEN_VERSION}
source emsdk_env.sh
```
But this doesn't work because on the first call, `pyodide config get emscripten_version` outputs this:
```
Downloading xbuild environment
Installing xbuild environment
3.1.27
```
On subsequent calls it returns `3.1.27`.
### To Reproduce
See above.
### Expected behavior
Calls to `pyodide config get emscripten_version` should only ever output the version string such that this command can be reliably used in build automation.
### Environment
- Pyodide Version: Pyodide CLI Version 0.2.2
### Additional context
As a workaround for build scripts, `pyodide config get emscripten_version` can be called once before actually using it.
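In script form, the workaround might look like this (a sketch; it assumes the `pyodide` CLI is on `PATH`):

```py
import subprocess

cmd = ["pyodide", "config", "get", "emscripten_version"]
# Warm-up call: triggers the one-time xbuildenv download messages; output discarded.
subprocess.run(cmd, capture_output=True)
# The second call now prints only the version string.
version = subprocess.run(cmd, capture_output=True, text=True, check=True).stdout.strip()
print(version)  # e.g. 3.1.27
```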
| Thanks for the report @josephrocca! I guess we need to be more careful when printing log messages to stdout...
Is this now fixed by https://github.com/pyodide/pyodide/pull/3442?
No, it is not fixed yet. Probably I'll make a fix in a few days, but I got COVID-19 so I'm not in a very good condition to code :<
Hope you feel better soon @ryanking13!
This might be a slightly personal question @ryanking13, but are you doing OK? I hope you have a support network with you. Sending fast-recovery thoughts your way!
@agoose77 Yeah, I am totally okay, thanks for asking! I am just experiencing some typical omicron symptoms, but it is getting better quickly. | 2023-01-22T03:02:35 |
|
pyodide/pyodide | 3,485 | pyodide__pyodide-3485 | [
"3472"
] | 1b2375f76cdfda7dc0927f3701bac4b6b527c643 | diff --git a/pyodide-build/pyodide_build/out_of_tree/venv.py b/pyodide-build/pyodide_build/out_of_tree/venv.py
--- a/pyodide-build/pyodide_build/out_of_tree/venv.py
+++ b/pyodide-build/pyodide_build/out_of_tree/venv.py
@@ -5,13 +5,7 @@
from pathlib import Path
from typing import Any
-from ..common import (
- check_emscripten_version,
- exit_with_stdio,
- get_make_flag,
- get_pyodide_root,
- in_xbuildenv,
-)
+from ..common import exit_with_stdio, get_make_flag, get_pyodide_root, in_xbuildenv
from ..logger import logger
@@ -224,8 +218,6 @@ def create_pyodide_venv(dest: Path) -> None:
logger.error(f"ERROR: dest directory '{dest}' already exists")
sys.exit(1)
- check_emscripten_version()
-
interp_path = pyodide_dist_dir() / "python"
session = session_via_cli(["--no-wheel", "-p", str(interp_path), str(dest)])
check_host_python_version(session)
| `pyodide_build.out_of_tree.venv.create_pyodide_venv` wrongly requires an Emscripten compiler
See https://github.com/pyodide/pyodide/discussions/3462#discussioncomment-4710208
| 2023-01-22T04:32:13 |
||
pyodide/pyodide | 3,495 | pyodide__pyodide-3495 | [
"3493"
] | d3c2a6dfa854e9e00ae862e021332db9772ef8d9 | diff --git a/pyodide-build/pyodide_build/buildpkg.py b/pyodide-build/pyodide_build/buildpkg.py
--- a/pyodide-build/pyodide_build/buildpkg.py
+++ b/pyodide-build/pyodide_build/buildpkg.py
@@ -27,11 +27,11 @@
from . import common, pypabuild
from .common import (
- BUILD_VARS,
chdir,
exit_with_stdio,
find_matching_wheels,
find_missing_executables,
+ set_build_environment,
)
from .io import MetaConfig, _BuildSpec, _SourceSpec
from .logger import logger
@@ -117,20 +117,8 @@ def __exit__(
@contextmanager
def get_bash_runner() -> Iterator[BashRunnerWithSharedEnvironment]:
PYODIDE_ROOT = os.environ["PYODIDE_ROOT"]
- env = {key: os.environ[key] for key in BUILD_VARS} | {"PYODIDE": "1"}
- if "PYODIDE_JOBS" in os.environ:
- env["PYODIDE_JOBS"] = os.environ["PYODIDE_JOBS"]
-
- env["PKG_CONFIG_PATH"] = env["WASM_PKG_CONFIG_PATH"]
- if "PKG_CONFIG_PATH" in os.environ:
- env["PKG_CONFIG_PATH"] += f":{os.environ['PKG_CONFIG_PATH']}"
-
- tools_dir = Path(__file__).parent / "tools"
-
- env["CMAKE_TOOLCHAIN_FILE"] = str(
- tools_dir / "cmake/Modules/Platform/Emscripten.cmake"
- )
- env["PYO3_CONFIG_FILE"] = str(tools_dir / "pyo3_config.ini")
+ env: dict[str, str] = {}
+ set_build_environment(env)
with BashRunnerWithSharedEnvironment(env=env) as b:
b.run(f"source {PYODIDE_ROOT}/pyodide_env.sh", stderr=subprocess.DEVNULL)
diff --git a/pyodide-build/pyodide_build/common.py b/pyodide-build/pyodide_build/common.py
--- a/pyodide-build/pyodide_build/common.py
+++ b/pyodide-build/pyodide_build/common.py
@@ -459,3 +459,25 @@ def chdir(new_dir: Path) -> Generator[None, None, None]:
yield
finally:
os.chdir(orig_dir)
+
+
+def set_build_environment(env: dict[str, str]) -> None:
+ """Assign build environment variables to env.
+
+ Sets common environment between in tree and out of tree package builds.
+ """
+ env.update({key: os.environ[key] for key in BUILD_VARS})
+ env["PYODIDE"] = "1"
+ if "PYODIDE_JOBS" in os.environ:
+ env["PYODIDE_JOBS"] = os.environ["PYODIDE_JOBS"]
+
+ env["PKG_CONFIG_PATH"] = env["WASM_PKG_CONFIG_PATH"]
+ if "PKG_CONFIG_PATH" in os.environ:
+ env["PKG_CONFIG_PATH"] += f":{os.environ['PKG_CONFIG_PATH']}"
+
+ tools_dir = Path(__file__).parent / "tools"
+
+ env["CMAKE_TOOLCHAIN_FILE"] = str(
+ tools_dir / "cmake/Modules/Platform/Emscripten.cmake"
+ )
+ env["PYO3_CONFIG_FILE"] = str(tools_dir / "pyo3_config.ini")
diff --git a/pyodide-build/pyodide_build/out_of_tree/build.py b/pyodide-build/pyodide_build/out_of_tree/build.py
--- a/pyodide-build/pyodide_build/out_of_tree/build.py
+++ b/pyodide-build/pyodide_build/out_of_tree/build.py
@@ -14,9 +14,11 @@ def run(exports: Any, args: list[str], outdir: Path | None = None) -> Path:
cxxflags += f" {os.environ.get('CXXFLAGS', '')}"
ldflags = common.get_make_flag("SIDE_MODULE_LDFLAGS")
ldflags += f" {os.environ.get('LDFLAGS', '')}"
+ env = os.environ.copy()
+ common.set_build_environment(env)
build_env_ctx = pypabuild.get_build_env(
- env=os.environ.copy(),
+ env=env,
pkgname="",
cflags=cflags,
cxxflags=cxxflags,
| Detecting Pyodide, at build time
## Documentation
The current documentation reads:
To detect Pyodide, at build time use,
```
import os
if "PYODIDE" in os.environ:
# building for Pyodide
```
This used to work on my computer using pyodide-build v0.21.
When I tried it using pyodide-build v0.22 however, the test `if "PYODIDE" in os.environ` returned false.
Inspecting my `os.environ`, I changed the test to `if ("PYODIDE_ROOT" in os.environ):`, and my setup.py detects when the code is being built with pyodide.
This led me to believe the docs needed updating. However, searching the pyodide repo, I saw `if "PYODIDE" in os.environ` in the following patch file: https://github.com/pyodide/pyodide/blob/main/packages/Pillow/patches/0001-Enable-image-formats.patch
So I'm not really sure anymore ...
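For now, a belt-and-braces check covering both observations above could be (a workaround sketch, not the documented API):

```py
import os

# "PYODIDE" is what the docs promise; "PYODIDE_ROOT" is what pyodide-build
# v0.22 was observed to set, so check both until this is settled.
BUILDING_FOR_PYODIDE = "PYODIDE" in os.environ or "PYODIDE_ROOT" in os.environ
```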
| For in-tree build the PYODIDE env variable is still [set here](https://github.com/pyodide/pyodide/blob/main/Makefile.envs#L45) and also set in [get_bash_runner](https://github.com/pyodide/pyodide/blob/main/pyodide-build/pyodide_build/buildpkg.py#L120), so that should work.
For out-of-tree builds with `pyodide build` it does indeed look like it's not set. The `Makefile.envs` shipped in `xbuildenv-0.22.0.tar.bz2` is never loaded, unless I'm missing something. Maybe @hoodmane could confirm.
If that's true, we could add it [here](https://github.com/pyodide/pyodide/blob/3c92574c57e8a57d5e2b081e9a19b01cfc9b3745/pyodide-build/pyodide_build/out_of_tree/build.py#L19) I think:
```py
env_vars = os.environ.copy()
env_vars['PYODIDE'] = 1
build_env_ctx = pypabuild.get_build_env(
...,
env=env_vars
)
```
I think we should extract these lines:
https://github.com/pyodide/pyodide/blob/main/pyodide-build/pyodide_build/buildpkg.py#L119-L133
into a separate function in `common.py`. Or at least whatever part of it should be shared with `pyodide build`. | 2023-01-23T20:56:08 |
|
pyodide/pyodide | 3,498 | pyodide__pyodide-3498 | [
"3433"
] | 036f9cb5454479ffd66368f7a5e44288a1934163 | diff --git a/docs/conf.py b/docs/conf.py
--- a/docs/conf.py
+++ b/docs/conf.py
@@ -19,12 +19,6 @@
project = "Pyodide"
copyright = "2019-2022, Pyodide contributors and Mozilla"
-pyodide_version = "0.22.0"
-
-if ".dev" in pyodide_version or os.environ.get("READTHEDOCS_VERSION") == "latest":
- CDN_URL = "https://cdn.jsdelivr.net/pyodide/dev/full/"
-else:
- CDN_URL = f"https://cdn.jsdelivr.net/pyodide/v{pyodide_version}/full/"
# -- General configuration ---------------------------------------------------
@@ -133,20 +127,14 @@
# A list of files that should not be packed into the epub file.
epub_exclude_files = ["search.html"]
-
-def delete_attrs(cls):
- """Prevent attributes of a class or module from being documented.
-
- The top level documentation comment of the class or module will still be
- rendered.
- """
- for name in dir(cls):
- if not name.startswith("_"):
- try:
- delattr(cls, name)
- except Exception:
- pass
-
+base_dir = Path(__file__).resolve().parent.parent
+extra_sys_path_dirs = [
+ str(base_dir),
+ str(base_dir / "pyodide-build"),
+ str(base_dir / "docs/sphinx_pyodide"),
+ str(base_dir / "src/py"),
+ str(base_dir / "packages/micropip/src"),
+]
# Try not to cause side effects if we are imported incidentally.
@@ -159,7 +147,39 @@ def delete_attrs(cls):
IN_READTHEDOCS = "READTHEDOCS" in os.environ
+if IN_SPHINX:
+ sys.path = extra_sys_path_dirs + sys.path
+ import builtins
+
+ from sphinx_pyodide.util import docs_argspec
+
+ # override docs_argspec, _pyodide.docs_argspec will read this value back.
+ # Must do this before importing pyodide!
+ setattr(builtins, "--docs_argspec--", docs_argspec)
+
+ # Monkey patch for python3.11 incompatible code
+ import inspect
+
+ if not hasattr(inspect, "getargspec"):
+ inspect.getargspec = inspect.getfullargspec # type: ignore[assignment]
+
+import pyodide
+
+# The full version, including alpha/beta/rc tags.
+release = version = pyodide.__version__
+
+if ".dev" in version or os.environ.get("READTHEDOCS_VERSION") == "latest":
+ CDN_URL = "https://cdn.jsdelivr.net/pyodide/dev/full/"
+else:
+ CDN_URL = f"https://cdn.jsdelivr.net/pyodide/v{version}/full/"
+
+html_title = f"Version {version}"
+
+global_replacements = {"{{PYODIDE_CDN_URL}}": CDN_URL, "{{VERSION}}": version}
+
+
if IN_READTHEDOCS:
+ # Make console.html file
env = {"PYODIDE_BASE_URL": CDN_URL}
os.makedirs("_build/html", exist_ok=True)
res = subprocess.check_output(
@@ -185,81 +205,15 @@ def delete_attrs(cls):
console_path.write_text("".join(console_html))
-def docs_argspec(argspec: str) -> Any:
- """Decorator to override the argument spec of the function that is
- rendered in the docs.
-
- If the documentation finds a __wrapped__ attribute, it will use that to
- render the argspec instead of the function itself. Assign an appropriate
- fake function to __wrapped__ with the desired argspec.
- """
-
- def dec(func):
- d = func.__globals__
- TEMP_NAME = "_xxxffff___"
- assert TEMP_NAME not in d
- code = dedent(
- f"""\
- def {TEMP_NAME}{argspec}:
- pass
- """
- )
- # exec in func.__globals__ context so that type hints don't case NameErrors.
- exec(code, d)
- f = d.pop(TEMP_NAME)
-
- # # Run update_wrapper but keep f's annotations and don't set
- # # f.__wrapped__ (would cause an infinite loop!)
- annotations = f.__annotations__
- update_wrapper(f, func)
- f.__annotations__ = annotations
- del f.__wrapped__
-
- # Set wrapper to be our fake function
- func.__wrapped__ = f
-
- return func
-
- return dec
-
-
if IN_SPHINX:
- import inspect
-
- if not hasattr(inspect, "getargspec"):
- inspect.getargspec = inspect.getfullargspec # type: ignore[assignment]
-
- base_dir = Path(__file__).resolve().parent.parent
- path_dirs = [
- str(base_dir),
- str(base_dir / "pyodide-build"),
- str(base_dir / "docs/sphinx_pyodide"),
- str(base_dir / "src/py"),
- str(base_dir / "packages/micropip/src"),
- ]
- sys.path = path_dirs + sys.path
-
from sphinx.domains.javascript import JavaScriptDomain, JSXRefRole
JavaScriptDomain.roles["class"] = JSXRefRole()
- import builtins
- from functools import update_wrapper
- from textwrap import dedent
-
- # override docs_argspec, _pyodide.docs_argspec will read this value back.
- # Must do this before importing pyodide!
- setattr(builtins, "--docs_argspec--", docs_argspec)
-
- import pyodide
from pyodide.ffi import JsProxy
del JsProxy.__new__
- # The full version, including alpha/beta/rc tags.
- release = version = pyodide.__version__
- html_title = f"Version {version}"
-
shutil.copy("../src/core/pyproxy.ts", "../src/js/pyproxy.gen.ts")
shutil.copy("../src/core/error_handling.ts", "../src/js/error_handling.gen.ts")
js_source_path = [str(x) for x in Path("../src/js").glob("*.ts")]
@@ -280,6 +234,8 @@ def remove_pyproxy_gen_ts():
# Prevent API docs for webloop methods: they are the same as for base event loop
# and it clutters api docs too much
+ from sphinx_pyodide.util import delete_attrs
+
import pyodide.console
import pyodide.webloop
@@ -299,8 +255,6 @@ def globalReplace(app, docname, source):
source[0] = result
-global_replacements = {"{{PYODIDE_CDN_URL}}": CDN_URL}
-
always_document_param_types = True
diff --git a/docs/sphinx_pyodide/sphinx_pyodide/util.py b/docs/sphinx_pyodide/sphinx_pyodide/util.py
new file mode 100644
--- /dev/null
+++ b/docs/sphinx_pyodide/sphinx_pyodide/util.py
@@ -0,0 +1,55 @@
+from functools import update_wrapper
+from textwrap import dedent
+from typing import Any
+
+
+def delete_attrs(cls):
+ """Prevent attributes of a class or module from being documented.
+
+ The top level documentation comment of the class or module will still be
+ rendered.
+ """
+ for name in dir(cls):
+ if not name.startswith("_"):
+ try:
+ delattr(cls, name)
+ except Exception:
+ pass
+
+
+def docs_argspec(argspec: str) -> Any:
+ """Decorator to override the argument spec of the function that is
+ rendered in the docs.
+
+ If the documentation finds a __wrapped__ attribute, it will use that to
+ render the argspec instead of the function itself. Assign an appropriate
+ fake function to __wrapped__ with the desired argspec.
+ """
+
+ def dec(func):
+ d = func.__globals__
+ TEMP_NAME = "_xxxffff___"
+ assert TEMP_NAME not in d
+ code = dedent(
+ f"""\
+ def {TEMP_NAME}{argspec}:
+ pass
+ """
+ )
+        # exec in func.__globals__ context so that type hints don't cause NameErrors.
+ exec(code, d)
+ f = d.pop(TEMP_NAME)
+
+ # # Run update_wrapper but keep f's annotations and don't set
+ # # f.__wrapped__ (would cause an infinite loop!)
+ annotations = f.__annotations__
+ update_wrapper(f, func)
+ f.__annotations__ = annotations
+ del f.__wrapped__
+
+ # Set wrapper to be our fake function
+ func.__wrapped__ = f
+
+ return func
+
+ return dec
diff --git a/tools/bump_version.py b/tools/bump_version.py
--- a/tools/bump_version.py
+++ b/tools/bump_version.py
@@ -49,11 +49,6 @@ def build_version_pattern(pattern):
pattern=build_version_pattern('__version__ = "{python_version}"'),
prerelease=True,
),
- Target(
- ROOT / "docs/conf.py",
- build_version_pattern('pyodide_version = "{python_version}"'),
- prerelease=True,
- ),
Target(
ROOT / "docs/project/about.md",
build_version_pattern(r"version\s*=\s*{{{python_version}}}"),
| Github release file description lacking in docs for v0.22
## Documentation
I noted that in the recently released 0.22 version of Pyodide the text for GitHub releases is wrong: https://pyodide.org/en/stable/usage/downloading-and-deploying.html#github-releases
It seems that there are now two files, `pyodide-0.22.0.tar.bz2` and `pyodide-core-0.22.0.tar.bz2`, but I'm not sure when I need which file. It would be great if this was written down. In addition, maybe the next section of the documentation would also need to be updated? (https://pyodide.org/en/stable/usage/downloading-and-deploying.html#serving-locally)
| Thanks for the report, this should indeed be updated. `pyodide` has all packages, `pyodide-core` has the minimal set of packages needed to run the Pyodide core test suite. The `pyodide-core` distribution is intended primarily for use in node. | 2023-01-24T02:51:42 |
|
pyodide/pyodide | 3,500 | pyodide__pyodide-3500 | [
"3327"
] | 3f845c87a1fbf7a95018dfc7dbd05079b6639efd | diff --git a/pyodide-build/pyodide_build/pywasmcross.py b/pyodide-build/pyodide_build/pywasmcross.py
--- a/pyodide-build/pyodide_build/pywasmcross.py
+++ b/pyodide-build/pyodide_build/pywasmcross.py
@@ -285,6 +285,8 @@ def replay_genargs_handle_argument(arg: str) -> str | None:
# gcc flag that clang does not support
"-Bsymbolic-functions",
'-fno-second-underscore',
+ '-fstack-protector', # doesn't work?
+ '-fno-strict-overflow', # warning: argument unused during compilation
]:
return None
# fmt: on
| Please support pynacl
## Package Request
- Package Name and Version: pynacl==latest
- Package URL: https://pypi.org/project/PyNaCl/
I have a user of [Awesome Panel Sharing](https://awesome-panel.org/sharing) who would like to be able to use [PyGithub](https://github.com/PyGithub/PyGithub) in his Panel data app. See [panel-sharing #79](https://github.com/awesome-panel/awesome-panel/issues/79). This package depends on `pynacl`.

## Additional Context
I see in the request for Cryptography https://github.com/pyodide/pyodide/issues/761 that pynacl was also mentioned and there is some relation.
Thanks for the great work on Pyodide and bringing scientific Python to the browser.
| 2023-01-24T03:45:01 |
||
pyodide/pyodide | 3,533 | pyodide__pyodide-3533 | [
"3513"
] | c63356598307d432da1248e51e71c463bc0f7e43 | diff --git a/docs/conf.py b/docs/conf.py
--- a/docs/conf.py
+++ b/docs/conf.py
@@ -37,7 +37,6 @@
"autodocsumm",
"sphinx_pyodide",
"sphinx_argparse_cli",
- "versionwarning.extension",
"sphinx_issues",
"sphinx_autodoc_typehints",
"sphinx_design", # Used for tabs in building-from-sources.md
@@ -51,15 +50,12 @@
root_for_relative_js_paths = "../src/"
issues_github_path = "pyodide/pyodide"
-versionwarning_messages = {
- "latest": (
- "This is the development version of the documentation. "
- 'See <a href="https://pyodide.org/">here</a> for latest stable '
- "documentation. Please do not use Pyodide with non "
- "versioned (`dev`) URLs from the CDN for deployed applications!"
- )
-}
-versionwarning_body_selector = "#main-content > div"
+versionwarning_message = (
+ "This is the development version of the documentation. "
+ 'See <a href="https://pyodide.org/">here</a> for latest stable '
+ "documentation. Please do not use Pyodide with non "
+ "versioned (`dev`) URLs from the CDN for deployed applications!"
+)
autosummary_generate = True
autodoc_default_flags = ["members", "inherited-members"]
@@ -105,7 +101,9 @@
html_logo = "_static/img/pyodide-logo.png"
# theme-specific options
-html_theme_options: dict[str, Any] = {}
+html_theme_options: dict[str, Any] = {
+ "announcement": "",
+}
# paths that contain custom static files (such as style sheets)
html_static_path = ["_static"]
@@ -131,6 +129,9 @@
IN_SPHINX = "sphinx" in sys.modules and hasattr(sys.modules["sphinx"], "application")
IN_READTHEDOCS = "READTHEDOCS" in os.environ
+IN_READTHEDOCS_LATEST = (
+ IN_READTHEDOCS and os.environ.get("READTHEDOCS_VERSION") == "latest"
+)
base_dir = Path(__file__).resolve().parent.parent
@@ -203,6 +204,12 @@ def calculate_pyodide_version(app):
}
+def set_announcement_message():
+ html_theme_options["announcement"] = (
+ versionwarning_message if IN_READTHEDOCS_LATEST else ""
+ )
+
+
def write_console_html(app):
# Make console.html file
env = {"PYODIDE_BASE_URL": app.config.CDN_URL}
@@ -327,6 +334,7 @@ def setup(app):
app.add_config_value("CDN_URL", "", True)
app.connect("source-read", global_replace)
+ set_announcement_message()
apply_patches()
calculate_pyodide_version(app)
ensure_typedoc_on_path()
| CSS broken in latest docs
## Bug

It seems like the versionwarning banner is not displayed correctly in the latest docs.
| 2023-02-02T01:49:47 |
||
pyodide/pyodide | 3,562 | pyodide__pyodide-3562 | [
"3561"
] | 68e42b1867a97d045392f4eaac25ea7e751ba859 | diff --git a/pyodide-build/pyodide_build/install_xbuildenv.py b/pyodide-build/pyodide_build/install_xbuildenv.py
--- a/pyodide-build/pyodide_build/install_xbuildenv.py
+++ b/pyodide-build/pyodide_build/install_xbuildenv.py
@@ -51,6 +51,7 @@ def install_xbuildenv(version: str, xbuildenv_path: Path) -> None:
[
"pip",
"install",
+ "--no-user",
"-t",
host_site_packages,
"-r",
| Error about `--user` and `--target` flag when installing xbuildenv
I sometimes get the following error while installing the xbuild environment:
```bash
$ pyodide build .
Downloading xbuild environment
Installing xbuild environment
stderr:
ERROR: Can not combine '--user' and '--target'
[notice] A new release of pip available: 22.3.1 -> 23.0
[notice] To update, run: /home/gitpod/.pyenv/versions/3.10.2/bin/python -m pip install --upgrade pip
```
It happens here, which installs the host site packages:
https://github.com/pyodide/pyodide/blob/7cc1058358242a5a9012edbb8163d86a860a1a28/pyodide-build/pyodide_build/install_xbuildenv.py#L50-L57
I think we need to add the `--no-user` flag explicitly to prevent this error.
| 2023-02-09T08:46:51 |
||
pyodide/pyodide | 3,580 | pyodide__pyodide-3580 | [
"3578"
] | 0fdf9a80fa33545794a1115bd5a793f390e11818 | diff --git a/src/py/_pyodide/_base.py b/src/py/_pyodide/_base.py
--- a/src/py/_pyodide/_base.py
+++ b/src/py/_pyodide/_base.py
@@ -298,6 +298,10 @@ def run(
``quiet_trailing_semicolon`` parameters to modify this default
behavior.
"""
+ if globals is None:
+ globals = {}
+ if locals is None:
+ locals = globals
if not self._compiled:
raise RuntimeError("Not yet compiled")
if self.code is None:
@@ -347,6 +351,10 @@ async def run_async(
``quiet_trailing_semicolon`` parameters to modify this default
behavior.
"""
+ if globals is None:
+ globals = {}
+ if locals is None:
+ locals = globals
if not self._compiled:
raise RuntimeError("Not yet compiled")
if self.code is None:
| diff --git a/src/tests/test_pyodide.py b/src/tests/test_pyodide.py
--- a/src/tests/test_pyodide.py
+++ b/src/tests/test_pyodide.py
@@ -197,6 +197,20 @@ def test_eval_code_locals():
eval_code("invalidate_caches()", globals, globals)
eval_code("invalidate_caches()", globals, locals)
+ # See https://github.com/pyodide/pyodide/issues/3578
+ with pytest.raises(NameError):
+ eval_code("print(self)")
+
+ res = eval_code(
+ """
+ var = "Hello"
+ def test():
+ return var
+ test()
+ """
+ )
+ assert res == "Hello"
+
def test_unpack_archive(selenium_standalone):
selenium = selenium_standalone
| Global namespace not accessible within a function
## Bug
A globally declared variable cannot be read in a function:
```
globVar = "Hello"
def test():
print(globVar)
test()
```
gives error:
```
NameError: name 'globVar' is not defined
```
Interestingly, when used with the `global` keyword, it works:
```
globVar = "Hello"
def test():
global globVar
print(globVar)
test()
```
prints
```
Hello
```
This issue only happens when the Python code is passed to the function `pyodide.pyodide_py.code.eval_code`. The bug is not present when the code is executed with `pyodide.runPython`, such as in the [console.html](https://pyodide.org/en/stable/console.html).
### To Reproduce
Run a slightly modified demo from the [Getting Started page](https://pyodide.org/en/stable/usage/quickstart.html) of pyodide:
```
<!doctype html>
<html>
<head>
<script src="https://cdn.jsdelivr.net/pyodide/v0.22.1/full/pyodide.js"></script>
</head>
<body>
Pyodide test page <br>
Open your browser console to see Pyodide output
<script type="text/javascript">
async function main(){
let pyodide = await loadPyodide();
console.log(pyodide.pyodide_py.code.eval_code(`
globVar = "Hello"
def test():
print(globVar)
test()
`));
}
main();
</script>
</body>
</html>
```
### Expected behavior
When the page above is opened in a web browser, it should print "Hello" to the console. Instead, it throws a `NameError: name 'globVar' is not defined`:
```
pyodide.asm.js:10 Uncaught (in promise) PythonError: Traceback (most recent call last):
File "/lib/python3.10/_pyodide/_base.py", line 460, in eval_code
.run(globals, locals)
File "/lib/python3.10/_pyodide/_base.py", line 306, in run
coroutine = eval(self.code, globals, locals)
File "<exec>", line 6, in <module>
File "<exec>", line 4, in test
NameError: name 'globVar' is not defined
at new_error (pyodide.asm.js:10:180810)
at pyodide.asm.wasm:0xe78a8
at pyodide.asm.wasm:0xe79ad
at Module._pythonexc2js (pyodide.asm.js:10:929422)
at Module.callPyObjectKwargs (pyodide.asm.js:10:123563)
at Module.callPyObject (pyodide.asm.js:10:123772)
at PyProxyClass.apply (pyodide.asm.js:10:134742)
at Object.apply (pyodide.asm.js:10:133736)
at main (bug.html:12:45)
new_error @ pyodide.asm.js:10
$wrap_exception @ pyodide.asm.wasm:0xe78a8
$pythonexc2js @ pyodide.asm.wasm:0xe79ad
Module._pythonexc2js @ pyodide.asm.js:10
Module.callPyObjectKwargs @ pyodide.asm.js:10
Module.callPyObject @ pyodide.asm.js:10
apply @ pyodide.asm.js:10
apply @ pyodide.asm.js:10
main @ bug.html:12
await in main (async)
(anonymous) @ bug.html:20
```
### Environment
- Pyodide Version: 0.22.1
- Browser version: Version 110.0.5481.77 (Official Build) (64-bit)
Built from commit `5ef5622414f0fe8ab8c54694e04ab1b13939b7d4` (tagged `0.22.1`), but it can be reproduced as well with the CDN build on https://cdn.jsdelivr.net/pyodide/v0.22.1/full/pyodide.js
### Additional context
The issue only happens when the code is invoked using the [eval_code](https://github.com/pyodide/pyodide/blob/main/src/py/_pyodide/_base.py#L362) function.
The documentation for the function states that `eval_code` can be called with the `globals` argument set to `None`, in which case a new empty dictionary is used. We have relied on this feature since pyodide v0.17. In pyodide v0.18, the issue with global variables appeared, and we tested and encountered the issue again in version v0.22.1. Is this behavior of hiding global variables a bug?
```
globals :
The global scope in which to execute code. This is used as the
``globals`` parameter for :py:func:`exec`. If ``globals`` is absent, a new
empty dictionary is used.
```
| It seems the JS API doesn't suffer from this problem, as `runPython` never calls `eval_code` without a `globals` argument. If the caller doesn't provide `globals`, the API falls back to the `API.globals` it has at hand, see: https://github.com/pyodide/pyodide/blob/main/src/js/api.ts#L74
Is there any reason why this is done this way? Is it correct for the caller to rely on the `eval_code` function to create empty globals when the `globals` parameter is absent?
I think this matches the behavior of normal eval/exec, which is the intended behavior.
I tested with Python 3.10.8, and the `exec` handles the code just right. Can this be a problem specific to pyodide?
```
$ python
Python 3.10.8 (main, Nov 1 2022, 14:18:21) [GCC 12.2.0] on linux
Type "help", "copyright", "credits" or "license" for more information.
>>> exec("""
... globVar = "Hello"
... def test():
... print(globVar)
...
... test()
... """)
Hello
```
Huh okay then it's definitely not the intended behavior. Thanks for the report!
I'm sorry I didn't read your original report carefully enough -- I thought `globVar` was defined outside of the exec'd string, not inside of it.
It seems to depend on which file executes `eval`. Start with:
```py
cr = CodeRunner(
"""
globVar = "Hello"
def test():
print(globVar)
test()
""",
return_mode="none"
).compile()
```
Works:
```py
eval(cr.code, None, None)
```
Doesn't work:
```py
cr.run()
```
Doesn't work: change the definition in `_base.py` of `run()` to:
```py
def run(self, *args):
eval(self.code, None, None)
```
Does work: monkey patch `run` to be the definition above, but do the monkey patch in the current file.
Doesn't work: monkey patch `run` in a different file.
If `globals` is `None`, then `eval` uses `PyEval_GetGlobals()` to determine the value of `globals`.
https://github.com/python/cpython/blob/main/Python/bltinmodule.c?plain=1#L920
This is sensitive to the file. I'm still not sure why it causes this problem though.
Minimal reproduction:
file1.py:
```py
def eval_code(code):
eval(code)
```
file2.py:
```py
from file1 import eval_code
code = compile(
"""
globVar = "Hello"
def test():
print(globVar)
test()
""", "<exec>", "exec", 0
)
eval_code(code)
```
Okay the problem is caused by the behavior of the locals argument. If you change it to `eval(code, None, globals())` it works.
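A plain-CPython sketch (no Pyodide involved) of why passing distinct globals/locals mappings breaks module-level names inside functions:

```py
glb, loc = {}, {}
code = compile("x = 1\ndef f(): return x\nf()", "<exec>", "exec")
try:
    exec(code, glb, loc)  # x and f land in loc, but f() resolves x via glb
except NameError as e:
    print(e)              # name 'x' is not defined
exec(code, glb, glb)      # same mapping for both: works
```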
Another symptom of this:
```py
from pyodide.code import eval_code
eval_code("print(self)")
```
prints `<_pyodide._base.CodeRunner object at 0xptr>`. | 2023-02-13T17:53:32 |
pyodide/pyodide | 3,592 | pyodide__pyodide-3592 | [
"3567"
] | 0d5bd851fe969c4ab62dd75384c255ac9ca35e5f | diff --git a/pyodide-build/pyodide_build/cli/xbuildenv.py b/pyodide-build/pyodide_build/cli/xbuildenv.py
new file mode 100644
--- /dev/null
+++ b/pyodide-build/pyodide_build/cli/xbuildenv.py
@@ -0,0 +1,32 @@
+from pathlib import Path
+
+import typer
+
+from ..install_xbuildenv import install
+
+app = typer.Typer(hidden=True)
+
+
[email protected]()
+def callback():
+ """
+ Create or install cross build environment
+ """
+
+
[email protected]("install") # type: ignore[misc]
+def _install(
+ path: Path = typer.Option(".pyodide-xbuildenv", help="path to xbuildenv directory"),
+ download: bool = typer.Option(False, help="download xbuildenv before installing"),
+ url: str = typer.Option(None, help="URL to download xbuildenv from"),
+) -> None:
+ """
+ Install xbuildenv.
+
+    The installed environment is the same as the one that would result from
+ `PYODIDE_PACKAGES='scipy' make` except that it is much faster.
+ The goal is to enable out-of-tree builds for binary packages that depend
+ on numpy or scipy.
+ Note: this is a private endpoint that should not be used outside of the Pyodide Makefile.
+ """
+ install(path, download=download, url=url)
diff --git a/pyodide-build/pyodide_build/install_xbuildenv.py b/pyodide-build/pyodide_build/install_xbuildenv.py
--- a/pyodide-build/pyodide_build/install_xbuildenv.py
+++ b/pyodide-build/pyodide_build/install_xbuildenv.py
@@ -1,11 +1,12 @@
import argparse
import json
+import os
import shutil
import subprocess
from pathlib import Path
from urllib.request import urlopen, urlretrieve
-from .common import exit_with_stdio, get_make_flag, get_pyodide_root
+from .common import exit_with_stdio, get_make_flag
from .create_pypa_index import create_pypa_index
from .logger import logger
@@ -20,19 +21,27 @@ def make_parser(parser: argparse.ArgumentParser) -> argparse.ArgumentParser:
"Note: this is a private endpoint that should not be used outside of the Pyodide Makefile."
)
parser.add_argument("--download", action="store_true", help="Download xbuild env")
+ parser.add_argument("--url", help="URL to download xbuild env from", default=None)
parser.add_argument("xbuildenv", type=str, nargs=1)
return parser
-def download_xbuildenv(version: str, xbuildenv_path: Path) -> None:
+def download_xbuildenv(
+ version: str, xbuildenv_path: Path, *, url: str | None = None
+) -> None:
from shutil import rmtree, unpack_archive
from tempfile import NamedTemporaryFile
logger.info("Downloading xbuild environment")
rmtree(xbuildenv_path, ignore_errors=True)
+
+ xbuildenv_url = (
+ url
+ or f"https://github.com/pyodide/pyodide/releases/download/{version}/xbuildenv-{version}.tar.bz2"
+ )
with NamedTemporaryFile(suffix=".tar") as f:
urlretrieve(
- f"https://github.com/pyodide/pyodide/releases/download/{version}/xbuildenv-{version}.tar.bz2",
+ xbuildenv_url,
f.name,
)
unpack_archive(f.name, xbuildenv_path)
@@ -40,12 +49,13 @@ def download_xbuildenv(version: str, xbuildenv_path: Path) -> None:
def install_xbuildenv(version: str, xbuildenv_path: Path) -> None:
logger.info("Installing xbuild environment")
+
xbuildenv_path = xbuildenv_path / "xbuildenv"
- pyodide_root = get_pyodide_root()
xbuildenv_root = xbuildenv_path / "pyodide-root"
- host_site_packages = xbuildenv_root / Path(
- get_make_flag("HOSTSITEPACKAGES")
- ).relative_to(pyodide_root)
+
+ os.environ["PYODIDE_ROOT"] = str(xbuildenv_root)
+
+ host_site_packages = Path(get_make_flag("HOSTSITEPACKAGES"))
host_site_packages.mkdir(exist_ok=True, parents=True)
result = subprocess.run(
[
@@ -68,8 +78,8 @@ def install_xbuildenv(version: str, xbuildenv_path: Path) -> None:
xbuildenv_path / "site-packages-extras", host_site_packages, dirs_exist_ok=True
)
cdn_base = f"https://cdn.jsdelivr.net/pyodide/v{version}/full/"
- if (xbuildenv_root / "repodata.json").exists():
- repodata_bytes = (xbuildenv_root / "repodata.json").read_bytes()
+ if (repodata_json := xbuildenv_root / "dist" / "repodata.json").exists():
+ repodata_bytes = repodata_json.read_bytes()
else:
repodata_url = cdn_base + "repodata.json"
with urlopen(repodata_url) as response:
@@ -79,11 +89,33 @@ def install_xbuildenv(version: str, xbuildenv_path: Path) -> None:
create_pypa_index(repodata["packages"], xbuildenv_root, cdn_base)
-def main(args: argparse.Namespace) -> None:
+def install(path: Path, *, download: bool = False, url: str | None = None) -> None:
+ """
+ Install cross-build environment.
+
+ Parameters
+ ----------
+ path
+ A path to the cross-build environment.
+ download
+ Whether to download the cross-build environment before installing it.
+ url
+ URL to download the cross-build environment from. This is only used
+ if `download` is True. The URL should point to a tarball containing
+ the cross-build environment. If not specified, the corresponding
+ release on GitHub is used.
+
+ Warning: if you are downloading from a version that is not the same
+ as the current version of pyodide-build, make sure that the cross-build
+ environment is compatible with the current version of Pyodide.
+ """
from . import __version__
- xbuildenv_path = Path(args.xbuildenv[0])
version = __version__
- if args.download:
- download_xbuildenv(version, xbuildenv_path)
- install_xbuildenv(version, xbuildenv_path)
+ if download:
+ download_xbuildenv(version, path, url=url)
+ install_xbuildenv(version, path)
+
+
+def main(args: argparse.Namespace) -> None:
+ install(Path(args.xbuildenv[0]), download=args.download, url=args.url)
| diff --git a/pyodide-build/pyodide_build/tests/fixture.py b/pyodide-build/pyodide_build/tests/fixture.py
--- a/pyodide-build/pyodide_build/tests/fixture.py
+++ b/pyodide-build/pyodide_build/tests/fixture.py
@@ -1,7 +1,12 @@
+import json
+import shutil
from pathlib import Path
+from typing import Any
import pytest
+from ..common import chdir
+
@pytest.fixture(scope="module")
def temp_python_lib(tmp_path_factory):
@@ -20,3 +25,53 @@ def temp_python_lib(tmp_path_factory):
(path / "hello_pyodide.py").write_text("def hello(): return 'hello'")
yield libdir
+
+
+def mock_repodata_json() -> dict[str, Any]:
+ # TODO: use pydantic
+
+ return {
+ "info": {
+ "version": "0.22.1",
+ },
+ "packages": {},
+ }
+
+
[email protected](scope="module")
+def temp_xbuildenv(tmp_path_factory):
+ """
+ Create a temporary xbuildenv archive
+ """
+ base = tmp_path_factory.mktemp("base")
+
+ path = Path(base)
+
+ xbuildenv = path / "xbuildenv"
+ xbuildenv.mkdir()
+
+ pyodide_root = xbuildenv / "pyodide-root"
+ site_packages_extra = xbuildenv / "site-packages-extras"
+ requirements_txt = xbuildenv / "requirements.txt"
+
+ pyodide_root.mkdir()
+ site_packages_extra.mkdir()
+ requirements_txt.touch()
+
+ (pyodide_root / "Makefile.envs").write_text(
+ """
+export HOSTSITEPACKAGES=$(PYODIDE_ROOT)/packages/.artifacts/lib/python$(PYMAJOR).$(PYMINOR)/site-packages
+
+.output_vars:
+ set
+"""
+ )
+ (pyodide_root / "dist").mkdir()
+ (pyodide_root / "dist" / "repodata.json").write_text(
+ json.dumps(mock_repodata_json())
+ )
+
+ with chdir(base):
+ archive_name = shutil.make_archive("xbuildenv", "tar")
+
+ yield base, archive_name
diff --git a/pyodide-build/pyodide_build/tests/test_cli.py b/pyodide-build/pyodide_build/tests/test_cli.py
--- a/pyodide-build/pyodide_build/tests/test_cli.py
+++ b/pyodide-build/pyodide_build/tests/test_cli.py
@@ -5,13 +5,21 @@
from pathlib import Path
import pytest
+from pytest_pyodide import spawn_web_server
import typer
from typer.testing import CliRunner # type: ignore[import]
from pyodide_build import common
-from pyodide_build.cli import build, build_recipes, config, create_zipfile, skeleton
+from pyodide_build.cli import (
+ build,
+ build_recipes,
+ config,
+ create_zipfile,
+ skeleton,
+ xbuildenv,
+)
-from .fixture import temp_python_lib
+from .fixture import temp_python_lib, temp_xbuildenv
only_node = pytest.mark.xfail_browsers(
chrome="node only", firefox="node only", safari="node only"
@@ -328,3 +336,30 @@ def test_create_zipfile_compile(temp_python_lib, tmp_path):
with ZipFile(output) as zf:
assert "module1.pyc" in zf.namelist()
assert "module2.pyc" in zf.namelist()
+
+
+def test_xbuildenv_install(tmp_path, temp_xbuildenv):
+ envpath = Path(tmp_path) / ".xbuildenv"
+
+ xbuildenv_url_base, xbuildenv_filename = temp_xbuildenv
+
+ with spawn_web_server(xbuildenv_url_base) as (hostname, port, _):
+ xbuildenv_url = f"http://{hostname}:{port}/{xbuildenv_filename}"
+ result = runner.invoke(
+ xbuildenv.app,
+ [
+ "install",
+ "--path",
+ str(envpath),
+ "--download",
+ "--url",
+ xbuildenv_url,
+ ],
+ )
+
+ assert result.exit_code == 0, result.stdout
+ assert "Downloading xbuild environment" in result.stdout, result.stdout
+ assert "Installing xbuild environment" in result.stdout, result.stdout
+ assert (envpath / "xbuildenv" / "pyodide-root").is_dir()
+ assert (envpath / "xbuildenv" / "site-packages-extras").is_dir()
+ assert (envpath / "xbuildenv" / "requirements.txt").exists()
| Option to install arbitrary version of xbuildenv for debug purpose
When testing out-of-tree builds with tip-of-tree pyodide-build, it is sometimes annoying to create a cross-build environment.
So I think it would be nice to have the option to install an arbitrary version of cross-build env, such as:
```
pyodide install-xbuildenv --version x.x.x
```
This should be a private entry point only for maintainers/contributors, since installing an incompatible version of cross-build env will definitely break the build. Those who use this entry point should know what they are doing.
| 2023-02-17T00:51:58 |
|
pyodide/pyodide | 3,607 | pyodide__pyodide-3607 | [
"3565"
] | 01c00d70b4c817f9715facbfdef8ddb831839868 | diff --git a/pyodide-build/pyodide_build/buildall.py b/pyodide-build/pyodide_build/buildall.py
--- a/pyodide-build/pyodide_build/buildall.py
+++ b/pyodide-build/pyodide_build/buildall.py
@@ -427,6 +427,13 @@ def job_priority(pkg: BasePackage) -> int:
return 1
+def is_rust_package(pkg: BasePackage) -> bool:
+ """
+ Check if a package requires rust toolchain to build.
+ """
+ return any([q in pkg.executables_required for q in ("rustc", "cargo", "rustup")])
+
+
def format_name_list(l: list[str]) -> str:
"""
>>> format_name_list(["regex"])
@@ -538,14 +545,30 @@ def build_from_graph(
built_queue: Queue[BasePackage | Exception] = Queue()
thread_lock = Lock()
queue_idx = 1
+ building_rust_pkg = False
progress_formatter = ReplProgressFormatter(len(needs_build))
def builder(n: int) -> None:
- nonlocal queue_idx
+ nonlocal queue_idx, building_rust_pkg
while True:
- pkg = build_queue.get()[1]
+ _, pkg = build_queue.get()
with thread_lock:
+ if is_rust_package(pkg):
+ # Don't build multiple rust packages at the same time.
+ # See: https://github.com/pyodide/pyodide/issues/3565
+ # Note that if there are only rust packages left in the queue,
+ # this will keep pushing and popping packages until the current rust package
+ # is built. This is not ideal but presumably the overhead is negligible.
+ if building_rust_pkg:
+ build_queue.put((job_priority(pkg), pkg))
+
+ # Release the GIL so new packages get queued
+ sleep(0.1)
+ continue
+
+ building_rust_pkg = True
+
pkg._queue_idx = queue_idx
queue_idx += 1
@@ -569,6 +592,11 @@ def builder(n: int) -> None:
progress_formatter.remove_package(pkg_status)
built_queue.put(pkg)
+
+ with thread_lock:
+ if is_rust_package(pkg):
+ building_rust_pkg = False
+
# Release the GIL so new packages get queued
sleep(0.01)
| diff --git a/pyodide-build/pyodide_build/tests/test_buildall.py b/pyodide-build/pyodide_build/tests/test_buildall.py
--- a/pyodide-build/pyodide_build/tests/test_buildall.py
+++ b/pyodide-build/pyodide_build/tests/test_buildall.py
@@ -153,3 +153,28 @@ def test_requirements_executable(monkeypatch):
m.setattr(shutil, "which", lambda exe: "/bin")
buildall.generate_dependency_graph(RECIPE_DIR, {"pkg_test_executable"})
+
+
[email protected]("exe", ["rustc", "cargo", "rustup"])
+def test_is_rust_package(exe):
+
+ pkg = buildall.BasePackage(
+ pkgdir=Path(""),
+ name="test",
+ version="1.0.0",
+ disabled=False,
+ meta=None, # type: ignore[arg-type]
+ package_type="package",
+ run_dependencies=[],
+ host_dependencies=[],
+ host_dependents=set(),
+ dependencies=set(),
+ unbuilt_host_dependencies=set(),
+ executables_required=[exe],
+ )
+
+ assert buildall.is_rust_package(pkg)
+
+ pkg.executables_required = []
+
+ assert not buildall.is_rust_package(pkg)
| Issue in building rust-based packages in parallel
We are having some issues when building multiple rust-based packages in parallel.
For example, see: https://app.circleci.com/pipelines/github/pyodide/pyodide/5328/workflows/a0ced892-ac1a-4519-984c-d98a84fabe29/jobs/66900
It is expected behavior since `rustup` was not designed for concurrent use (https://github.com/rust-lang/rustup/issues/988).
Some possible workarounds that I can think of are:
1. Add some logic to not build rust-based packages in parallel (a sketch of this option follows below)
2. Create a fresh local rust build environment every time a rust-based package is built. I found a Python package called [rustenv](https://github.com/chriskuehl/rustenv), which is designed for that purpose. It is a ~150 LoC script that basically installs rust from `https://sh.rustup.rs` into a local directory each time.
I slightly prefer option 1, as we don't have to be responsible for installing build requirements. But I have no strong opinion.
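A minimal sketch of option 1 (illustrative only — `run_build` and the package object are hypothetical stand-ins, not the actual buildall API): serialize any build that needs the rust toolchain behind a shared lock.
```python
import threading

_RUST_LOCK = threading.Lock()

def needs_rust(pkg) -> bool:
    # Hypothetical check mirroring a recipe's executable requirements.
    return any(exe in pkg.executables_required for exe in ("rustc", "cargo", "rustup"))

def run_build(pkg) -> None:
    ...  # placeholder for the real per-package build step

def build_one(pkg) -> None:
    if needs_rust(pkg):
        # rustup is not safe for concurrent use, so rust builds run one at a time.
        with _RUST_LOCK:
            run_build(pkg)
    else:
        run_build(pkg)
```
Non-rust packages still build in parallel; only the rust-based ones contend for the lock.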
| +1 for option 1 as well. | 2023-02-24T01:41:58 |
pyodide/pyodide | 3,645 | pyodide__pyodide-3645 | [
"3436"
] | 7d6be15cb84c61cefd056130db4a518fb472a085 | diff --git a/pyodide-build/pyodide_build/pywasmcross.py b/pyodide-build/pyodide_build/pywasmcross.py
--- a/pyodide-build/pyodide_build/pywasmcross.py
+++ b/pyodide-build/pyodide_build/pywasmcross.py
@@ -415,14 +415,27 @@ def calculate_object_exports_nm(objects: list[str]) -> list[str]:
return result.stdout.splitlines()
+def filter_objects(line: list[str]) -> list[str]:
+ """
+ Collect up all the object files and archive files being linked.
+ """
+ return [
+ arg
+ for arg in line
+ if arg.endswith((".a", ".o"))
+ or arg.startswith(
+ "@"
+ ) # response file (https://gcc.gnu.org/wiki/Response_Files)
+ ]
+
+
def calculate_exports(line: list[str], export_all: bool) -> Iterable[str]:
"""
- Collect up all the object files and archive files being linked and list out
- symbols in them that are marked as public. If ``export_all`` is ``True``,
- then return all public symbols. If not, return only the public symbols that
- begin with `PyInit`.
+ List out symbols from object files and archive files that are marked as public.
+ If ``export_all`` is ``True``, then return all public symbols.
+ If not, return only the public symbols that begin with `PyInit`.
"""
- objects = [arg for arg in line if arg.endswith((".a", ".o"))]
+ objects = filter_objects(line)
exports = None
# Using emnm is simpler but it cannot handle bitcode. If we're only
# exporting the PyInit symbols, save effort by using nm.
| diff --git a/pyodide-build/pyodide_build/tests/test_pywasmcross.py b/pyodide-build/pyodide_build/tests/test_pywasmcross.py
--- a/pyodide-build/pyodide_build/tests/test_pywasmcross.py
+++ b/pyodide-build/pyodide_build/tests/test_pywasmcross.py
@@ -5,6 +5,7 @@
from pyodide_build.pywasmcross import (
BuildArgs,
calculate_exports,
+ filter_objects,
get_cmake_compiler_flags,
get_library_output,
handle_command_generate_args,
@@ -186,6 +187,33 @@ def test_conda_unsupported_args(build_args):
)
[email protected](
+ "line, expected",
+ [
+ ([], []),
+ (
+ [
+ "obj1.o",
+ "obj2.o",
+ "slib1.so",
+ "slib2.so",
+ "lib1.a",
+ "lib2.a",
+ "-o",
+ "test.so",
+ ],
+ ["obj1.o", "obj2.o", "lib1.a", "lib2.a"],
+ ),
+ (
+ ["@dir/link.txt", "obj1.o", "obj2.o", "test.so"],
+ ["@dir/link.txt", "obj1.o", "obj2.o"],
+ ),
+ ],
+)
+def test_filter_objects(line, expected):
+ assert filter_objects(line) == expected
+
+
@pytest.mark.xfail(reason="FIXME: emcc is not available during test")
def test_exports_node(tmp_path):
template = """
| Improve `pywasmcross` to handle response files
@ryanking13 mentioned [here](https://github.com/pyodide/pyodide/issues/3427#issuecomment-1374418990) that:
> pywasmcross calculates what symbols need to be exported by looking at objects files (.a, .o). But it doesn't handle response files (.rsp). So we need to add a logic to parse response files there.
I actually didn't even know what a response file was, so I just asked ChatGPT to do it for me, and here's what it says:
---
A response file is a file that contains a list of arguments that can be passed to a command line tool. For example, suppose we have a command line tool called "tool" that accepts a list of input files and processes them. Instead of typing out the long list of input files every time we want to run the tool, we can put the list of input files in a response file and pass the response file to the tool using the "@" symbol. For example:
```
tool @response_file.rsp
```
The contents of response_file.rsp might look something like this:
```
input_file1.txt
input_file2.txt
input_file3.txt
```
To handle response files in the `calculate_exports` function, we can modify the function to parse the response file and extract the list of arguments contained within it. Here is an example of how this could be done:
```py
def calculate_exports(line: list[str], export_all: bool) -> Iterable[str]:
"""
Collect up all the object files and archive files being linked and list out
symbols in them that are marked as public. If ``export_all`` is ``True``,
then return all public symbols. If not, return only the public symbols that
begin with `PyInit`.
"""
# Parse response files and extract list of arguments
objects = []
for arg in line:
if arg.endswith(".rsp"):
with open(arg, "r") as f:
objects.extend(f.read().splitlines())
else:
objects.append(arg)
# Filter objects to only include .a and .o files
objects = [obj for obj in objects if obj.endswith((".a", ".o"))]
exports = None
# Using emnm is simpler but it cannot handle bitcode. If we're only
# exporting the PyInit symbols, save effort by using nm.
if export_all:
exports = calculate_object_exports_readobj(objects)
if exports is None:
# Either export_all is false or we are linking at least one bitcode
# object. Fall back to a more conservative estimate of the symbols
# exported. This can export things with `__visibility__("hidden")`
exports = calculate_object_exports_nm(objects)
if export_all:
return exports
return (x for x in exports if x.startswith("PyInit"))
```
---
Seems simple enough? It just replaced [this line](https://github.com/pyodide/pyodide/blob/aeb066bbcef310a419bf058347bbc4d23ca52ec1/pyodide-build/pyodide_build/pywasmcross.py#L501):
```py
objects = [arg for arg in line if arg.endswith((".a", ".o"))]
```
with this:
```py
# Parse response files and extract list of arguments
objects = []
for arg in line:
if arg.endswith(".rsp"):
with open(arg, "r") as f:
objects.extend(f.read().splitlines())
else:
objects.append(arg)
# Filter objects to only include .a and .o files
objects = [obj for obj in objects if obj.endswith((".a", ".o"))]
```
| 2023-03-09T01:17:55 |
|
pyodide/pyodide | 3,658 | pyodide__pyodide-3658 | [
"3640"
] | 7d6be15cb84c61cefd056130db4a518fb472a085 | diff --git a/pyodide-build/pyodide_build/_f2c_fixes.py b/pyodide-build/pyodide_build/_f2c_fixes.py
--- a/pyodide-build/pyodide_build/_f2c_fixes.py
+++ b/pyodide-build/pyodide_build/_f2c_fixes.py
@@ -268,6 +268,21 @@ def fix_line(line: str) -> str:
"""
)
+ # In numpy 1.24 f2py changed its treatment of character argument. In
+ # particular it does not generate a ftnlen parameter for each
+ # character parameter but f2c still generates it. The following code
+ # removes unneeded ftnlen parameters from the f2ced signature. The
+ # problematic subroutines and parameters are the ones with a type character
+ # in scipy/sparse/linalg/_eigen/arpack/arpack.pyf.src
+ if "eupd.c" in str(f2c_output):
+ # put signature on a single line to make replacement more
+ # straightforward
+ regrouped_lines = regroup_lines(lines)
+ lines = [
+ re.sub(r",?\s*ftnlen\s*(howmny_len|bmat_len)", "", line)
+ for line in regrouped_lines
+ ]
+
with open(f2c_output, "w") as f:
f.writelines(lines)
| diff --git a/packages/scipy/test_scipy.py b/packages/scipy/test_scipy.py
--- a/packages/scipy/test_scipy.py
+++ b/packages/scipy/test_scipy.py
@@ -63,6 +63,7 @@ def runtest(module, filter):
runtest("odr", "explicit")
runtest("signal.tests.test_ltisys", "TestImpulse2")
runtest("stats.tests.test_multivariate", "haar")
+ runtest("sparse.linalg._eigen", "test_svds_parameter_k_which")
@pytest.mark.driver_timeout(40)
| New signature mismatch in `scipy.sparse.linalg.svds`
## π Bug
I am seeing new fatal errors in the scipy test suite in https://github.com/lesteve/scipy-tests-pyodide since a few days ago. They are likely related to the numpy version upgrade in #3593 although I have no idea why.
### To Reproduce
You can reproduce it with the [Pyodide alpha console](https://pyodide.org/en/latest/console.html) and the following snippet:
```py
import numpy as np
from scipy.sparse.linalg import svds
rng = np.random.default_rng(0)
A = rng.random((10, 10))
res = svds(A, k=3, which='LM', random_state=0)
```
```
Stack (most recent call first):
File "/lib/python3.11/site-packages/scipy/sparse/linalg/_eigen/arpack/arpack.py", line 578 in extract
File "/lib/python3.11/site-packages/scipy/sparse/linalg/_eigen/arpack/arpack.py", line 1691 in eigsh
File "/lib/python3.11/site-packages/scipy/sparse/linalg/_eigen/_svds.py", line 339 in svds
File "<console>", line 1 in <module>
File "/lib/python311.zip/_pyodide/_base.py", line 363 in run_async
File "/lib/python311.zip/pyodide/console.py", line 378 in runcode
File "/lib/python311.zip/pyodide/console.py", line 492 in runcode
File "/lib/python311.zip/asyncio/events.py", line 80 in _run
File "/lib/python311.zip/pyodide/webloop.py", line 319 in run_handle
Uncaught RuntimeError: null function or function signature mismatch
at f2py_rout__arpack_dseupd (000ffae6:0x5e92)
at fortran_call (0003ecf2:0x65ac)
at _PyCFunctionWithKeywords_TrampolineCall (pyodide.asm.js:1375:97)
at _PyObject_MakeTpCall (pyodide.asm.wasm:0x199bba)
at PyObject_Vectorcall (pyodide.asm.wasm:0x19a1a4)
at _PyEval_EvalFrameDefault (pyodide.asm.wasm:0x25f947)
at PyEval_EvalCode (pyodide.asm.wasm:0x255ed8)
at builtin_eval (pyodide.asm.wasm:0x2531d6)
at _PyEval_EvalFrameDefault (pyodide.asm.wasm:0x25e78b)
at gen_send_ex2 (pyodide.asm.wasm:0x1adc62)
```
Debugging a bit more, the function is called with 23 arguments:
```
call_indirect (param i32 i32 i32 i32 i32 i32 i32 i32 i32 i32 i32 i32 i32 i32 i32 i32 i32 i32 i32 i32 i32 i32 i32) (result i32)
```
but has 25 arguments:
```
(func $dseupd_ (;219;) (export "dseupd_") (param $var0 i32) (param $var1 i32) (param $var2 i32) (param $var3 i32) (param $var4 i32) (param $var5 i32) (param $var6 i32) (param $var7 i32) (param $var8 i32) (param $var9 i32) (param $var10 i32) (param $var11 i32) (param $var12 i32) (param $var13 i32) (param $var14 i32) (param $var15 i32) (param $var16 i32) (param $var17 i32) (param $var18 i32) (param $var19 i32) (param $var20 i32) (param $var21 i32) (param $var22 i32) (param $var23 i32) (param $var24 i32) (result i32)
```
### Expected behavior
No fatal error
### Environment
- Pyodide Version: alpha
| Thanks for the report @lesteve! Too bad there isn't an "I hate this" react...
Here is the summary of my investigation:
- very likely due to https://github.com/numpy/numpy/pull/19388 there has been changes in f2py in numpy 1.24
- Here are is an excerpt of the diff on the arpackmodule:
```
diff -u /tmp/{working,problematic}/src.emscripten_3_1_32_wasm32-3.11/scipy/sparse/linalg/_eigen/arpack
/_arpackmodule.c
```
```diff
@@ -1110,10 +1165,10 @@
d,z,info = sseupd(rvec,howmny,select,sigma,bmat,which,nev,tol,resid,v,iparam,ipntr,workd,workl,info,[ldz,n,ncv,ldv,lworkl])\n\nWrapper for ``sseupd``.\
\n\nParameters\n----------\n"
"rvec : input int\n"
-"howmny : input string(len=1)\n"
+"howmny : input bytes\n"
"select : input rank-1 array('i') with bounds (ncv)\n"
"sigma : input float\n"
-"bmat : input string(len=1)\n"
+"bmat : input bytes\n"
"which : input string(len=2)\n"
"nev : input int\n"
"tol : input float\n"
@@ -1134,42 +1189,40 @@
```
```diff
@@ -1405,7 +1472,7 @@
#endif
/*callfortranroutine*/
Py_BEGIN_ALLOW_THREADS
- (*f2py_func)(&rvec,howmny,select,d,z,&ldz,&sigma,bmat,&n,which,&nev,&tol,resid,&ncv,v,&ldv,iparam,ipntr,workd,workl,&lworkl,&info,slen(howmny),slen(bmat),slen(which));
+ (*f2py_func)(&rvec,&howmny,select,d,z,&ldz,&sigma,&bmat,&n,which,&nev,&tol,resid,&ncv,v,&ldv,iparam,ipntr,workd,workl,&lworkl,&info,slen(which));
Py_END_ALLOW_THREADS
if (PyErr_Occurred())
f2py_success = 0;
#+end_src
```
So `howmny` and `bmat` have changed types from `string(len=1)` to `bytes`. With the f2py changes, the function call takes 23 arguments instead of 25: the previous (numpy 1.23.5) behaviour passed additional `slen(howmny)` and `slen(bmat)` arguments.
The thing is that the f2c'ed dseupd.f still has the old signature with 25 arguments
```
packages/scipy/build/scipy-1.9.3/scipy/sparse/linalg/_eigen/arpack/ARPACK/SRC/dseupd.c
```
```c
/* Subroutine */ int dseupd_(logical *rvec, char *howmny, logical *select,
doublereal *d__, doublereal *z__, integer *ldz, doublereal *sigma,
char *bmat, integer *n, char *which, integer *nev, doublereal *tol,
doublereal *resid, integer *ncv, doublereal *v, integer *ldv, integer
*iparam, integer *ipntr, doublereal *workd, doublereal *workl,
integer *lworkl, integer *info, ftnlen howmny_len, ftnlen bmat_len,
ftnlen which_len)
```
About the possible way forward, there may be a few different possibilities:
- patch `scipy/sparse/linalg/_eigen/arpack/arpack.pyf.src` and use `character*1` rather than `character` everywhere. This seems to work locally
- in `pyodide-build/pyodide_build/_f2c_fixes.py` change the signature somehow; it does not seem like the `_len` variables are used in the function, so maybe that could work (see the sketch after this list)
- patch somehow the call site adding 1 for the length of `howmny` and `bmat`, not sure how easy this is
- something else?
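A minimal sketch of the second option, assuming the `ftnlen` values are never read inside the routine so dropping them from the f2c'ed signature is safe (`drop_unused_ftnlen` is a hypothetical helper, and the real fix would also need the signature regrouped onto one line first):
```python
import re

def drop_unused_ftnlen(lines: list[str], names=("howmny_len", "bmat_len")) -> list[str]:
    # Remove f2c-generated ftnlen parameters that f2py (numpy >= 1.24)
    # no longer passes at the call site.
    pattern = re.compile(r",?\s*ftnlen\s*(" + "|".join(names) + r")")
    return [pattern.sub("", line) for line in lines]
```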
I am not sure whether this is worth reporting to numpy, since `f2py` and `f2c` don't generate the same signature, which may be an issue for us in other places ... I am also not sure how to put together a smaller reproducer that explains our issue without too much code ...
I think this problem is created in `_f2c_fixes.py`, quite possibly in the function `char1_args_to_int`.
The issue is that if you apply `character*1`, it produces an `ftnlen` argument at the end. So if you had:
```
f(a, b, c)
character *1 a, character*2 b, real c
```
then f2c makes a C function like:
```C
f(char* a, char* b, real* c, ftnlen a_len, ftnlen b_len)
```
but clapack and probably openblas have hand modified these functions to remove the ftnlen arguments, since they are kind of wrong for fixed length strings -- we don't need to pass a length, it's always 1 (respectively 2). But then we're f2cing other code that tries to call f, and it generates a signature like `f(char* a, char* b, real* c, ftnlen a_len, ftnlen b_len)` which is 5 i32 args and the actual function has 3 i32 args so there's a problem. So in `_f2c_fixes.py` there's automated tooling to remove these `ftnlen` args. But if the function being called was not defined in clapack, then it will have these `ftnlen` args so it needs to be left alone.
Thanks for the details, now I understand a bit more the reason for the char to int replacement!
In this case though the Fortran code is in scipy (in `scipy/sparse/linalg/_eigen/arpack/ARPACK/SRC/dseupd.f` so not related to BLAS or LAPACK) and is `f2c`ed by Pyodide right? So the change in `f2py` in numpy 1.24 breaks the consistency between the `f2py`ed code and the `f2c`ed code (maybe for good reasons ...).
I tried to follow your advice and replace `character` by `int` in the `dseupd.f` code in `fix_f2c_input`, but the thing is that there is code like `if (bmat .ne. 'I' .and. bmat .ne. 'G')` which fails to compile, so I guess these comparisons need to be replaced with their `ord` equivalents as well. I tried to do this; it compiles, but I get a runtime error that I need to investigate (no signature mismatch, so this gets a step further). The Python traceback is:
```
Traceback (most recent call last):
  File "<console>", line 1, in <module>
  File "/lib/python3.11/site-packages/scipy/sparse/linalg/_eigen/_svds.py", line 339, in svds
    _, eigvec = eigsh(XH_X, k=k, tol=tol ** 2, maxiter=maxiter,
  File "/lib/python3.11/site-packages/scipy/sparse/linalg/_eigen/arpack/arpack.py", line 1691, in eigsh
    return params.extract(return_eigenvectors)
  File "/lib/python3.11/site-packages/scipy/sparse/linalg/_eigen/arpack/arpack.py", line 585, in extract
    raise ArpackError(ierr, infodict=self.extract_infodict)
scipy.sparse.linalg._eigen.arpack.arpack.ArpackError: ARPACK error -15: HOWMNY must be one of 'A' or 'S' if RVEC = .true.
```
Somehow it feels slightly easier to patch the `pyf` to use `character*1` instead of `character` and avoid the `f2py` changes in numpy 1.24, but maybe I am missing something :wink:
> it feels slightly easier to
Not sure. I encourage you to try anything that you think might work. My patches are all "this is the first thing I tried that worked"; I am not sure that they are the best possible approach for the cases they cover nor for any new cases. | 2023-03-15T15:37:37 |
pyodide/pyodide | 3,684 | pyodide__pyodide-3684 | [
"3682"
] | bd2377444d614957c0b1b160fdbad339afb46676 | diff --git a/pyodide-build/pyodide_build/cli/skeleton.py b/pyodide-build/pyodide_build/cli/skeleton.py
--- a/pyodide-build/pyodide_build/cli/skeleton.py
+++ b/pyodide-build/pyodide_build/cli/skeleton.py
@@ -1,11 +1,13 @@
# Create or update a package recipe skeleton,
# inspired from `conda skeleton` command.
+import sys
from pathlib import Path
import typer
from .. import common, mkpkg
+from ..logger import logger
app = typer.Typer()
@@ -55,12 +57,21 @@ def new_recipe_pypi(
recipe_dir_ = pyodide_root / "packages" if not recipe_dir else Path(recipe_dir)
if update or update_patched:
- mkpkg.update_package(
- recipe_dir_,
- name,
- version,
- source_fmt=source_format, # type: ignore[arg-type]
- update_patched=update_patched,
- )
+ try:
+ mkpkg.update_package(
+ recipe_dir_,
+ name,
+ version,
+ source_fmt=source_format, # type: ignore[arg-type]
+ update_patched=update_patched,
+ )
+ except mkpkg.MkpkgFailedException as e:
+ logger.error(f"{name} update failed: {e}")
+ sys.exit(1)
+ except mkpkg.MkpkgSkipped as e:
+ logger.warn(f"{name} update skipped: {e}")
+ except Exception:
+ print(name)
+ raise
else:
mkpkg.make_package(recipe_dir_, name, version, source_fmt=source_format) # type: ignore[arg-type]
diff --git a/pyodide-build/pyodide_build/mkpkg.py b/pyodide-build/pyodide_build/mkpkg.py
--- a/pyodide-build/pyodide_build/mkpkg.py
+++ b/pyodide-build/pyodide_build/mkpkg.py
@@ -46,6 +46,10 @@ class MetadataDict(TypedDict):
vulnerabilities: list[Any]
+class MkpkgSkipped(Exception):
+ pass
+
+
class MkpkgFailedException(Exception):
pass
@@ -224,12 +228,13 @@ def update_package(
yaml_content = yaml.load(meta_path.read_bytes())
- if "url" not in yaml_content["source"]:
- raise MkpkgFailedException(f"Skipping: {package} is a local package!")
-
build_info = yaml_content.get("build", {})
- if build_info.get("library", False) or build_info.get("sharedlibrary", False):
- raise MkpkgFailedException(f"Skipping: {package} is a library!")
+ ty = build_info.get("type", None)
+ if ty in ["static_library", "shared_library", "cpython_module"]:
+ raise MkpkgSkipped(f"{package} is a {ty.replace('_', ' ')}!")
+
+ if "url" not in yaml_content["source"]:
+ raise MkpkgSkipped(f"{package} is a local package!")
if yaml_content["source"]["url"].endswith("whl"):
old_fmt = "wheel"
| pyodide-build shouldn't display tracebacks for "user error" / expected failures
## 🐛 Bug
pyodide-build has been updated to display rich tracebacks. This is useful only if the error was actually unexpected. In many places `pyodide-build` exits by raising an error which should just print the message and set a nonzero status code but not display some giant traceback message.
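A minimal sketch of the desired behavior — the names here (`BuildError`, `run_command`) are illustrative placeholders, not the actual pyodide-build API:
```python
import sys

class BuildError(Exception):
    """Hypothetical domain error for expected, user-facing failures."""

def run_command() -> None:
    raise BuildError("package 'foo' has no sdist")  # placeholder body

def main() -> None:
    try:
        run_command()
    except BuildError as exc:
        # Expected failure: print the message and exit nonzero, no traceback.
        print(f"Error: {exc}", file=sys.stderr)
        sys.exit(1)

if __name__ == "__main__":
    main()
```
Unexpected exceptions would still propagate and show the full rich traceback.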
| 2023-03-22T16:38:11 |
||
pyodide/pyodide | 3,727 | pyodide__pyodide-3727 | [
"3717"
] | 48cf29f75d229a8e2ca44a9e53ee9ab5341cce15 | diff --git a/pyodide-build/pyodide_build/_py_compile.py b/pyodide-build/pyodide_build/_py_compile.py
--- a/pyodide-build/pyodide_build/_py_compile.py
+++ b/pyodide-build/pyodide_build/_py_compile.py
@@ -212,7 +212,13 @@ def _py_compile_archive(
return None
path_out = input_path.parent / name_out
- _compile(input_path, path_out, keep=keep, verbose=verbose)
+ _compile(
+ input_path,
+ path_out,
+ keep=keep,
+ verbose=verbose,
+ compression_level=compression_level,
+ )
return path_out
| diff --git a/pyodide-build/pyodide_build/tests/test_cli.py b/pyodide-build/pyodide_build/tests/test_cli.py
--- a/pyodide-build/pyodide_build/tests/test_cli.py
+++ b/pyodide-build/pyodide_build/tests/test_cli.py
@@ -6,6 +6,7 @@
import pytest
from pytest_pyodide import spawn_web_server
+import zipfile
import typer
from typer.testing import CliRunner # type: ignore[import]
@@ -17,6 +18,7 @@
create_zipfile,
skeleton,
xbuildenv,
+ py_compile,
)
from .fixture import temp_python_lib, temp_python_lib2, temp_xbuildenv
@@ -367,3 +369,25 @@ def test_xbuildenv_install(tmp_path, temp_xbuildenv):
assert (envpath / "xbuildenv" / "pyodide-root").is_dir()
assert (envpath / "xbuildenv" / "site-packages-extras").is_dir()
assert (envpath / "xbuildenv" / "requirements.txt").exists()
+
+
[email protected]("target", ["dir", "file"])
[email protected]("compression_level", [0, 6])
+def test_py_compile(tmp_path, target, compression_level):
+ wheel_path = tmp_path / "python.zip"
+ with zipfile.ZipFile(wheel_path, "w", compresslevel=3) as zf:
+ zf.writestr("a1.py", "def f():\n pass")
+
+ if target == "dir":
+ target_path = tmp_path
+ elif target == "file":
+ target_path = wheel_path
+
+ py_compile.main(
+ path=target_path, silent=False, keep=False, compression_level=compression_level
+ )
+ with zipfile.ZipFile(tmp_path / "python.zip", "r") as fh:
+ if compression_level > 0:
+ assert fh.filelist[0].compress_type == zipfile.ZIP_DEFLATED
+ else:
+ assert fh.filelist[0].compress_type == zipfile.ZIP_STORED
| Should detect when a PyProxy is used with GIL released
## 🐛 Bug
The following code attempts to use a PyProxy with the GIL released.
```js
pyodide = await loadPyodide({stdout: (msg) => pyodide.globals.get("x")});
pyodide.runPython("print(1)")
```
Chaos ensues:

### Expected behavior
Throw a reasonable no-gil error. Don't fatally fail.
| 2023-04-01T18:58:38 |
|
pyodide/pyodide | 3,804 | pyodide__pyodide-3804 | [
"3770"
] | 14feead2fe7b41123ca9e72e6e5fd46d9e408593 | diff --git a/pyodide-build/pyodide_build/buildpkg.py b/pyodide-build/pyodide_build/buildpkg.py
--- a/pyodide-build/pyodide_build/buildpkg.py
+++ b/pyodide-build/pyodide_build/buildpkg.py
@@ -426,13 +426,7 @@ def compile(
from .pypabuild import build
outpath = srcpath / "dist"
- try:
- build(srcpath, outpath, build_env, backend_flags)
- except BaseException:
- build_log_path = Path("build.log")
- if build_log_path.exists():
- build_log_path.unlink()
- raise
+ build(srcpath, outpath, build_env, backend_flags)
def replace_so_abi_tags(wheel_dir: Path) -> None:
| No logs when erroring on build package
I'm trying to build a package with Cython sources using the "in tree" approach.
When running
```
pyodide build-recipes sisl --install
```
I get
```
The following packages are already built: boost-cpp, distutils, hashlib, libf2c, liblzma, lzma, micropip, numpy, openblas, openssl, packaging, pydecimal, pydoc_data, pyparsing,
scipy, setuptools, sqlite3, ssl, test, and tqdm
Building the following packages: sisl
[1/1] (thread 1) building sisl ⠋
Building packages... ━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━ 0/1 0% Time elapsed: 0:00:09
Error building sisl. Printing build logs.
ERROR: No build log found.
ERROR: cancelling buildall
[1/1] (thread 1) building sisl ⠹
Building packages... ━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━ 0/1 0% Time elapsed: 0:00:09
```
Therefore, I have no clue what's wrong. When does an error without build logs happen? Can I get some more verbose output in some way? Thanks!
I'm running this on the Docker container, after successfully building all packages, even an old version of the package I'm trying to build.
| If you use the old `pyodide-build buildpkg packages/sisl/meta.yaml` then that is more verbose. I think it is getting removed though.
Thank you! That helped! I was able to spot the error, although now I have another one :sweat_smile:
It's strange because with `pyodide-build buildpkg` I am seeing that `scikit-build` is already building the sources with `cmake`, but if I run `pyodide build-recipes` I get no logs.
@ryanking13 what's the equivalent of `buildpkg` in the new cli?
> @ryanking13 what's the equivalent of buildpkg in the new cli?
Try adding `--no-deps` parameter. This will print build logs to stdout.
```
pyodide build-recipes sisl --no-deps
```
Nice, that works. Thanks!
I'm not sure this issue should be closed though. Even without `--no-deps` it should probably store the build log to a file. Also, isn't it kind of illogical that `--no-deps` makes it print to stdout? You would think we would handle that with a verbosity flag which would be directly discoverable.
Yes, I definitely agree that a verbosity flag would be nice and it is very strange that `--no-deps` writes logs that otherwise are not written.
I closed it because the workaround was enough for me. Feel free to open again :)
Yes, the behavior of `--no-deps` is not very straightforward, as it actually calls a different API. When making `build-recipes` CLI, I tried to reuse existing APIs (buildall and buildpkg) as much as possible so for now the behavior is not very ideal. I think we need to find a way to merge these two APIs for consistency.
Anyway, I think the point of this issue is not about that, so I think we can discuss it in a separate issue. | 2023-04-30T12:15:29 |
|
pyodide/pyodide | 3,811 | pyodide__pyodide-3811 | [
"3806"
] | 4777f5920059ac97bc159bc6c5c725dbca2703a6 | diff --git a/pyodide-build/pyodide_build/cli/build.py b/pyodide-build/pyodide_build/cli/build.py
--- a/pyodide-build/pyodide_build/cli/build.py
+++ b/pyodide-build/pyodide_build/cli/build.py
@@ -1,5 +1,6 @@
import re
import shutil
+import sys
import tempfile
from pathlib import Path
from typing import Optional
@@ -113,9 +114,12 @@ def main(
help="Build source, can be source folder, pypi version specification, or url to a source dist archive or wheel file. If this is blank, it will build the current directory.",
),
output_directory: str = typer.Option(
- "./dist",
+ "",
+ "--outdir",
+ "-o",
help="which directory should the output be placed into?",
),
+ output_directory_compat: str = typer.Option("", "--output-directory", hidden=True),
requirements_txt: str = typer.Option(
"",
"--requirements",
@@ -143,6 +147,16 @@ def main(
ctx: typer.Context = typer.Context,
) -> None:
"""Use pypa/build to build a Python package from source, pypi or url."""
+ if output_directory_compat:
+ print(
+ "--output-directory is deprecated, use --outdir or -o instead",
+ file=sys.stderr,
+ )
+ if output_directory_compat and output_directory:
+ print("Cannot provide both --outdir and --output-directory", file=sys.stderr)
+ sys.exit(1)
+ output_directory = output_directory_compat or output_directory or "./dist"
+
outpath = Path(output_directory).resolve()
outpath.mkdir(exist_ok=True)
extras: list[str] = []
| diff --git a/pyodide-build/pyodide_build/tests/test_cli.py b/pyodide-build/pyodide_build/tests/test_cli.py
--- a/pyodide-build/pyodide_build/tests/test_cli.py
+++ b/pyodide-build/pyodide_build/tests/test_cli.py
@@ -444,9 +444,7 @@ def mocked_build(srcdir: Path, outdir: Path, env: Any, backend_flags: Any) -> st
srcdir.mkdir()
app = typer.Typer()
app.command(**build.main.typer_kwargs)(build.main) # type:ignore[attr-defined]
- result = runner.invoke(
- app, [str(srcdir), "--output-directory", str(outdir), "x", "y", "z"]
- )
+ result = runner.invoke(app, [str(srcdir), "--outdir", str(outdir), "x", "y", "z"])
print(result)
print(result.stdout)
assert result.exit_code == 0
| Allow specifying output directory in pyodide build with -o or --outdir
I added the capability to specify the output directory in `pyodide build` with `--output-directory`. But `pypa/build` accepts either `--outdir` or `-o`. We should accept these as ways to specify the output directory, for consistency.
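A minimal typer sketch of the requested aliases (illustrative; the real CLI would also keep `--output-directory` working during a deprecation window):
```python
import typer

app = typer.Typer()

@app.command()
def build(
    outdir: str = typer.Option(
        "./dist", "--outdir", "-o", help="Where to place the built wheel."
    ),
) -> None:
    typer.echo(f"writing output to {outdir}")

if __name__ == "__main__":
    app()
```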
| Can't we use the same parameter name as pypa/build? Or would that be a backward-incompatible change now? Maybe with a deprecation cycle..
I would say:
1. Add "correct" pypa/build names in addition to the "incorrect" one I used
2. remove "incorrect" name after a few versions
But we could remove it sooner if you strongly prefer. It just would make cibuildwheel add some version logic to decide which flags to pass.
It's certainly not urgent, as long as keep a single correct flag in the end. So a deprecation then removal after a few versions sounds good. | 2023-05-01T03:25:47 |
pyodide/pyodide | 3,815 | pyodide__pyodide-3815 | [
"3770"
] | b526d093ddaf686ccc30d1c93d680c5561a58e40 | diff --git a/pyodide-build/pyodide_build/buildpkg.py b/pyodide-build/pyodide_build/buildpkg.py
--- a/pyodide-build/pyodide_build/buildpkg.py
+++ b/pyodide-build/pyodide_build/buildpkg.py
@@ -426,13 +426,7 @@ def compile(
from .pypabuild import build
outpath = srcpath / "dist"
- try:
- build(srcpath, outpath, build_env, backend_flags)
- except BaseException:
- build_log_path = Path("build.log")
- if build_log_path.exists():
- build_log_path.unlink()
- raise
+ build(srcpath, outpath, build_env, backend_flags)
def replace_so_abi_tags(wheel_dir: Path) -> None:
diff --git a/pyodide-build/pyodide_build/cli/build.py b/pyodide-build/pyodide_build/cli/build.py
--- a/pyodide-build/pyodide_build/cli/build.py
+++ b/pyodide-build/pyodide_build/cli/build.py
@@ -1,5 +1,6 @@
import re
import shutil
+import sys
import tempfile
from pathlib import Path
from typing import Optional
@@ -113,9 +114,12 @@ def main(
help="Build source, can be source folder, pypi version specification, or url to a source dist archive or wheel file. If this is blank, it will build the current directory.",
),
output_directory: str = typer.Option(
- "./dist",
+ "",
+ "--outdir",
+ "-o",
help="which directory should the output be placed into?",
),
+ output_directory_compat: str = typer.Option("", "--output-directory", hidden=True),
requirements_txt: str = typer.Option(
"",
"--requirements",
@@ -143,6 +147,16 @@ def main(
ctx: typer.Context = typer.Context,
) -> None:
"""Use pypa/build to build a Python package from source, pypi or url."""
+ if output_directory_compat:
+ print(
+ "--output-directory is deprecated, use --outdir or -o instead",
+ file=sys.stderr,
+ )
+ if output_directory_compat and output_directory:
+ print("Cannot provide both --outdir and --output-directory", file=sys.stderr)
+ sys.exit(1)
+ output_directory = output_directory_compat or output_directory or "./dist"
+
outpath = Path(output_directory).resolve()
outpath.mkdir(exist_ok=True)
extras: list[str] = []
| diff --git a/pyodide-build/pyodide_build/tests/test_cli.py b/pyodide-build/pyodide_build/tests/test_cli.py
--- a/pyodide-build/pyodide_build/tests/test_cli.py
+++ b/pyodide-build/pyodide_build/tests/test_cli.py
@@ -411,9 +411,7 @@ def mocked_build(srcdir: Path, outdir: Path, env: Any, backend_flags: Any) -> st
srcdir.mkdir()
app = typer.Typer()
app.command(**build.main.typer_kwargs)(build.main) # type:ignore[attr-defined]
- result = runner.invoke(
- app, [str(srcdir), "--output-directory", str(outdir), "x", "y", "z"]
- )
+ result = runner.invoke(app, [str(srcdir), "--outdir", str(outdir), "x", "y", "z"])
print(result)
print(result.stdout)
assert result.exit_code == 0
diff --git a/src/tests/test_cmdline_runner.py b/src/tests/test_cmdline_runner.py
--- a/src/tests/test_cmdline_runner.py
+++ b/src/tests/test_cmdline_runner.py
@@ -145,6 +145,32 @@ def test_invalid_cmdline_option(selenium):
)
+@only_node
+def test_extra_mounts(selenium, tmp_path, monkeypatch):
+ dir_a = tmp_path / "a"
+ dir_b = tmp_path / "b"
+ dir_a.mkdir()
+ dir_b.mkdir()
+
+ tmp_path_a = dir_a / "script.py"
+ tmp_path_b = dir_b / "script.py"
+ tmp_path_a.write_text("print('hello 1')")
+ tmp_path_b.write_text("print('hello 2')")
+ monkeypatch.setenv("_PYODIDE_EXTRA_MOUNTS", f"{dir_a}:{dir_b}")
+ result = subprocess.run(
+ [script_path, tmp_path_a], capture_output=True, encoding="utf8"
+ )
+ assert result.returncode == 0
+ assert result.stdout == "hello 1\n"
+ assert result.stderr == ""
+ result = subprocess.run(
+ [script_path, tmp_path_b], capture_output=True, encoding="utf8"
+ )
+ assert result.returncode == 0
+ assert result.stdout == "hello 2\n"
+ assert result.stderr == ""
+
+
@contextmanager
def venv_ctxmgr(path):
check_emscripten()
| No logs when erroring on build package
I'm trying to build a package with Cyhton sources with the "in tree" approach.
When running
```
pyodide build-recipes sisl --install
```
I get
```
The following packages are already built: boost-cpp, distutils, hashlib, libf2c, liblzma, lzma, micropip, numpy, openblas, openssl, packaging, pydecimal, pydoc_data, pyparsing,
scipy, setuptools, sqlite3, ssl, test, and tqdm
Building the following packages: sisl
[1/1] (thread 1) building sisl β
Building packages... ββββββββββββββββββββββββββββββββββββββββ 0/1 0% Time elapsed: 0:00:09Error building sisl. Printing build logs.
ERROR: No build log found.
ERROR: cancelling buildall
[1/1] (thread 1) building sisl β Ή
Building packages... ββββββββββββββββββββββββββββββββββββββββ 0/1 0% Time elapsed: 0:00:09
```
Therefore, I have no clue what's wrong. When does an error without build logs happen? Can I get some more verbose output in some way? Thanks!
I'm running this on the Docker container, after successfully building all packages, even an old version of the package I'm trying to build.
| 2023-05-01T18:30:17 |
|
pyodide/pyodide | 3,853 | pyodide__pyodide-3853 | [
"3849"
] | 0fe04cd97d9c808a9d77335a630faf371f7ec200 | diff --git a/docs/sphinx_pyodide/sphinx_pyodide/mdn_xrefs.py b/docs/sphinx_pyodide/sphinx_pyodide/mdn_xrefs.py
--- a/docs/sphinx_pyodide/sphinx_pyodide/mdn_xrefs.py
+++ b/docs/sphinx_pyodide/sphinx_pyodide/mdn_xrefs.py
@@ -55,6 +55,25 @@
"Function.apply": "$global/",
"Function.bind": "$global/",
"Function.call": "$global/",
+ "Array.join": "$global/",
+ "Array.slice": "$global/",
+ "Array.lastIndexOf": "$global/",
+ "Array.indexOf": "$global/",
+ "Array.forEach": "$global/",
+ "Array.map": "$global/",
+ "Array.filter": "$global/",
+ "Array.reduce": "$global/",
+ "Array.reduceRight": "$global/",
+ "Array.some": "$global/",
+ "Array.every": "$global/",
+ "Array.at": "$global/",
+ "Array.concat": "$global/",
+ "Array.includes": "$global/",
+ "Array.entries": "$global/",
+ "Array.keys": "$global/",
+ "Array.values": "$global/",
+ "Array.find": "$global/",
+ "Array.findIndex": "$global/",
},
"js:data": {
"Iterable": "$reference/Iteration_protocols#the_iterable_protocol",
| diff --git a/src/tests/test_pyproxy.py b/src/tests/test_pyproxy.py
--- a/src/tests/test_pyproxy.py
+++ b/src/tests/test_pyproxy.py
@@ -2,7 +2,7 @@
import time
import pytest
-from pytest_pyodide import run_in_pyodide
+from pytest_pyodide.decorator import run_in_pyodide
def test_pyproxy_class(selenium):
@@ -1578,3 +1578,521 @@ async def test_multiple_interpreters(selenium):
d1 = {"a": 2}
d2 = py2.runPython(str(d1))
assert d2.toJs().to_py() == d1
+
+
+@run_in_pyodide
+def test_pyproxy_of_list_index(selenium):
+ from pyodide.code import run_js
+
+ pylist = [9, 8, 7]
+ jslist = run_js(
+ """
+ (p) => {
+ return [p[0], p[1], p[2]]
+ }
+ """
+ )(pylist)
+ assert jslist.to_py() == pylist
+
+
+@run_in_pyodide
+def test_pyproxy_of_list_join(selenium):
+ from pyodide.code import run_js
+ from pyodide.ffi import to_js
+
+ a = ["Wind", "Water", "Fire"]
+ ajs = to_js(a)
+ func = run_js("((a, k) => a.join(k))")
+
+ assert func(a, None) == func(ajs, None)
+ assert func(a, ", ") == func(ajs, ", ")
+ assert func(a, " ") == func(ajs, " ")
+
+
+@run_in_pyodide
+def test_pyproxy_of_list_slice(selenium):
+ from pyodide.code import run_js
+ from pyodide.ffi import to_js
+
+ a = ["ant", "bison", "camel", "duck", "elephant"]
+ ajs = to_js(a)
+
+ func_strs = [
+ "a.slice(2)",
+ "a.slice(2, 4)",
+ "a.slice(1, 5)",
+ "a.slice(-2)",
+ "a.slice(2, -1)",
+ "a.slice()",
+ ]
+ for func_str in func_strs:
+ func = run_js(f"(a) => {func_str}")
+ assert func(a).to_py() == func(ajs).to_py()
+
+
+@run_in_pyodide
+def test_pyproxy_of_list_indexOf(selenium):
+ from pyodide.code import run_js
+ from pyodide.ffi import to_js
+
+ a = ["ant", "bison", "camel", "duck", "bison"]
+ ajs = to_js(a)
+
+ func_strs = [
+ "beasts.indexOf('bison')",
+ "beasts.indexOf('bison', 2)",
+ "beasts.indexOf('bison', -4)",
+ "beasts.indexOf('bison', 3)",
+ "beasts.indexOf('giraffe')",
+ ]
+ for func_str in func_strs:
+ func = run_js(f"(beasts) => {func_str}")
+ assert func(a) == func(ajs)
+
+
+@run_in_pyodide
+def test_pyproxy_of_list_lastIndexOf(selenium):
+ from pyodide.code import run_js
+ from pyodide.ffi import to_js
+
+ a = ["ant", "bison", "camel", "duck", "bison"]
+ ajs = to_js(a)
+
+ func_strs = [
+ "beasts.lastIndexOf('bison')",
+ "beasts.lastIndexOf('bison', 2)",
+ "beasts.lastIndexOf('bison', -4)",
+ "beasts.lastIndexOf('bison', 3)",
+ "beasts.lastIndexOf('giraffe')",
+ ]
+ for func_str in func_strs:
+ func = run_js(f"(beasts) => {func_str}")
+ assert func(a) == func(ajs)
+
+
+@run_in_pyodide
+def test_pyproxy_of_list_forEach(selenium):
+ from pyodide.code import run_js
+ from pyodide.ffi import to_js
+
+ a = ["a", "b", "c"]
+ ajs = to_js(a)
+
+ func = run_js(
+ """
+ ((a) => {
+ let s = "";
+ a.forEach((elt, idx, list) => {
+ s += "::";
+ s += idx;
+ s += elt;
+ s += this[elt];
+ },
+ {a: 6, b: 9, c: 22}
+ );
+ return s;
+ })
+ """
+ )
+
+ assert func(a) == func(ajs)
+
+
+@run_in_pyodide
+def test_pyproxy_of_list_map(selenium):
+ from pyodide.code import run_js
+ from pyodide.ffi import to_js
+
+ a = ["a", "b", "c"]
+ ajs = to_js(a)
+ func = run_js(
+ """
+ (a) => a.map(
+ function (elt, idx, list){
+ return [elt, idx, this[elt]]
+ },
+ {a: 6, b: 9, c: 22}
+ )
+ """
+ )
+ assert func(a).to_py() == func(ajs).to_py()
+
+
+@run_in_pyodide
+def test_pyproxy_of_list_filter(selenium):
+ from pyodide.code import run_js
+ from pyodide.ffi import to_js
+
+ a = list(range(20, 0, -2))
+ ajs = to_js(a)
+ func = run_js(
+ """
+ (a) => a.filter(
+ function (elt, idx){
+ return elt + idx > 12
+ }
+ )
+ """
+ )
+ assert func(a).to_py() == func(ajs).to_py()
+
+
+@run_in_pyodide
+def test_pyproxy_of_list_reduce(selenium):
+ from pyodide.code import run_js
+ from pyodide.ffi import to_js
+
+ a = list(range(20, 0, -2))
+ ajs = to_js(a)
+ func = run_js(
+ """
+ (a) => a.reduce((l, r) => l + 2*r)
+ """
+ )
+ assert func(a) == func(ajs)
+
+
+@run_in_pyodide
+def test_pyproxy_of_list_reduceRight(selenium):
+ from pyodide.code import run_js
+ from pyodide.ffi import to_js
+
+ a = list(range(20, 0, -2))
+ ajs = to_js(a)
+ func = run_js(
+ """
+ (a) => a.reduceRight((l, r) => l + 2*r)
+ """
+ )
+ assert func(a) == func(ajs)
+
+
+@run_in_pyodide
+def test_pyproxy_of_list_some(selenium):
+ from pyodide.code import run_js
+ from pyodide.ffi import to_js
+
+ func = run_js("(a) => a.some((element, idx) => (element + idx) % 2 === 0)")
+ for a in [
+ [1, 2, 3, 4, 5],
+ [2, 3, 4, 5],
+ [1, 3, 5],
+ [1, 4, 5],
+ [4, 5],
+ ]:
+ assert func(a) == func(to_js(a))
+
+
+@run_in_pyodide
+def test_pyproxy_of_list_every(selenium):
+ from pyodide.code import run_js
+ from pyodide.ffi import to_js
+
+ func = run_js("(a) => a.every((element, idx) => (element + idx) % 2 === 0)")
+ for a in [
+ [1, 2, 3, 4, 5],
+ [2, 3, 4, 5],
+ [1, 3, 5],
+ [1, 4, 5],
+ [4, 5],
+ ]:
+ assert func(a) == func(to_js(a))
+
+
+@run_in_pyodide
+def test_pyproxy_of_list_at(selenium):
+ from pyodide.code import run_js
+ from pyodide.ffi import to_js
+
+ a = [5, 12, 8, 130, 44]
+ ajs = to_js(a)
+
+ func = run_js("(a, idx) => a.at(idx)")
+ for idx in [2, 3, 4, -2, -3, -4, 5, 7, -7]:
+ assert func(a, idx) == func(ajs, idx)
+
+
+@run_in_pyodide
+def test_pyproxy_of_list_concat(selenium):
+ from pyodide.code import run_js
+ from pyodide.ffi import to_js
+
+ a = [[5, 12, 8], [130, 44], [6, 7, 7]]
+ ajs = to_js(a)
+
+ func = run_js("(a, b, c) => a.concat(b, c)")
+ assert func(*a).to_py() == func(*ajs).to_py()
+
+
+@run_in_pyodide
+def test_pyproxy_of_list_includes(selenium):
+ from pyodide.code import run_js
+ from pyodide.ffi import to_js
+
+ a = [5, 12, 8, 130, 44, 6, 7, 7]
+ ajs = to_js(a)
+
+ func = run_js("(a, n) => a.includes(n)")
+ for n in range(4, 10):
+ assert func(a, n) == func(ajs, n)
+
+
+@run_in_pyodide
+def test_pyproxy_of_list_entries(selenium):
+ from pyodide.code import run_js
+ from pyodide.ffi import to_js
+
+ a = [5, 12, 8, 130, 44, 6, 7, 7]
+ ajs = to_js(a)
+
+ func = run_js("(a, k) => Array.from(a[k]())")
+ for k in ["entries", "keys", "values"]:
+ assert func(a, k).to_py() == func(ajs, k).to_py()
+
+
+@run_in_pyodide
+def test_pyproxy_of_list_find(selenium):
+ from pyodide.code import run_js
+ from pyodide.ffi import to_js
+
+ a = [5, 12, 8, 130, 44, 6, 7, 7]
+ ajs = to_js(a)
+
+ func = run_js("(a, k) => a[k](element => element > 10)")
+ for k in ["find", "findIndex"]:
+ assert func(a, k) == func(ajs, k)
+
+
+@run_in_pyodide
+def test_pyproxy_of_list_sort(selenium):
+ # from
+ # https://developer.mozilla.org/en-US/docs/Web/JavaScript/Reference/Global_Objects/Array/sort#creating_displaying_and_sorting_an_array
+ # Yes, JavaScript sort is weird.
+ from pyodide.code import run_js
+
+ stringArray = ["Blue", "Humpback", "Beluga"]
+ numberArray = [40, None, 1, 5, 200]
+ numericStringArray = ["80", "9", "700"]
+ mixedNumericArray = ["80", "9", "700", 40, 1, 5, 200]
+
+ run_js("globalThis.compareNumbers = (a, b) => a - b")
+
+ assert run_js("((a) => a.join())")(stringArray) == "Blue,Humpback,Beluga"
+ assert run_js("((a) => a.sort())")(stringArray) is stringArray
+ assert stringArray == ["Beluga", "Blue", "Humpback"]
+
+ assert run_js("((a) => a.join())")(numberArray) == "40,,1,5,200"
+ assert run_js("((a) => a.sort())")(numberArray) == [1, 200, 40, 5, None]
+ assert run_js("((a) => a.sort(compareNumbers))")(numberArray) == [
+ 1,
+ 5,
+ 40,
+ 200,
+ None,
+ ]
+
+ assert run_js("((a) => a.join())")(numericStringArray) == "80,9,700"
+ assert run_js("((a) => a.sort())")(numericStringArray) == ["700", "80", "9"]
+ assert run_js("((a) => a.sort(compareNumbers))")(numericStringArray) == [
+ "9",
+ "80",
+ "700",
+ ]
+
+ assert run_js("((a) => a.join())")(mixedNumericArray) == "80,9,700,40,1,5,200"
+ assert run_js("((a) => a.sort())")(mixedNumericArray) == [
+ 1,
+ 200,
+ 40,
+ 5,
+ "700",
+ "80",
+ "9",
+ ]
+ assert run_js("((a) => a.sort(compareNumbers))")(mixedNumericArray) == [
+ 1,
+ 5,
+ "9",
+ 40,
+ "80",
+ 200,
+ "700",
+ ]
+
+
+@run_in_pyodide
+def test_pyproxy_of_list_reverse(selenium):
+ from pyodide.code import run_js
+ from pyodide.ffi import to_js
+
+ a = [3, 2, 4, 1, 5]
+ ajs = to_js(a)
+
+ func = run_js("((a) => a.reverse())")
+ assert func(a) is a
+ func(ajs)
+ assert ajs.to_py() == a
+
+
[email protected](
+ "func",
+ [
+ 'splice(2, 0, "drum")',
+ 'splice(2, 0, "drum", "guitar")',
+ "splice(3, 1)",
+ 'splice(2, 1, "trumpet")',
+ 'splice(0, 2, "parrot", "anemone", "blue")',
+ "splice(2, 2)",
+ "splice(-2, 1)",
+ "splice(2)",
+ "splice()",
+ ],
+)
+@run_in_pyodide
+def test_pyproxy_of_list_splice(selenium, func):
+ from pyodide.code import run_js
+ from pyodide.ffi import to_js
+
+ a = ["angel", "clown", "mandarin", "sturgeon"]
+ ajs = to_js(a)
+
+ func = run_js(f"((a) => a.{func})")
+ assert func(a).to_py() == func(ajs).to_py()
+ assert a == ajs.to_py()
+
+
+@run_in_pyodide
+def test_pyproxy_of_list_push(selenium):
+ from pyodide.code import run_js
+ from pyodide.ffi import to_js
+
+ a = [4, 5, 6]
+ ajs = to_js(a)
+
+ func = run_js("(a) => a.push(1, 2, 3)")
+ assert func(a) == func(ajs)
+ assert ajs.to_py() == a
+
+ a = [4, 5, 6]
+ ajs = to_js(a)
+ func = run_js(
+ """
+ (a) => {
+ a.push(1);
+ a.push(2);
+ return a.push(3);
+ }
+ """
+ )
+ assert func(a) == func(ajs)
+ assert ajs.to_py() == a
+
+
+@run_in_pyodide
+def test_pyproxy_of_list_pop(selenium):
+ from pyodide.code import run_js
+ from pyodide.ffi import to_js
+
+ func = run_js("((a) => a.pop())")
+
+ for a in [
+ [],
+ ["broccoli", "cauliflower", "cabbage", "kale", "tomato"],
+ ]:
+ ajs = to_js(a)
+ assert func(a) == func(ajs)
+ assert ajs.to_py() == a
+
+
+@run_in_pyodide
+def test_pyproxy_of_list_shift(selenium):
+ from pyodide.code import run_js
+ from pyodide.ffi import to_js
+
+ a = ["Andrew", "Tyrone", "Paul", "Maria", "Gayatri"]
+ ajs = to_js(a)
+
+ func = run_js(
+ """
+ (a) => {
+ let result = [];
+ while (typeof (i = a.shift()) !== "undefined") {
+ result.push(i);
+ }
+ return result;
+ }
+ """
+ )
+ assert func(a).to_py() == func(ajs).to_py()
+ assert a == []
+ assert ajs.to_py() == []
+
+
+@run_in_pyodide
+def test_pyproxy_of_list_unshift(selenium):
+ from pyodide.code import run_js
+ from pyodide.ffi import to_js
+
+ a = [4, 5, 6]
+ ajs = to_js(a)
+
+ func = run_js("(a) => a.unshift(1, 2, 3)")
+ assert func(a) == func(ajs)
+ assert ajs.to_py() == a
+
+ a = [4, 5, 6]
+ ajs = to_js(a)
+ func = run_js(
+ """
+ (a) => {
+ a.unshift(1);
+ a.unshift(2);
+ return a.unshift(3);
+ }
+ """
+ )
+ assert func(a) == func(ajs)
+ assert ajs.to_py() == a
+
+
[email protected](
+ "func",
+ [
+ "copyWithin(-2)",
+ "copyWithin(0, 3)",
+ "copyWithin(0, 3, 4)",
+ "copyWithin(-2, -3, -1)",
+ ],
+)
+@run_in_pyodide
+def test_pyproxy_of_list_copyWithin(selenium, func):
+ from pyodide.code import run_js
+ from pyodide.ffi import to_js
+
+ a = ["a", "b", "c", "d", "e"]
+ ajs = to_js(a)
+ func = run_js(f"(a) => a.{func}")
+ assert func(a) is a
+ func(ajs)
+ assert a == ajs.to_py()
+
+
[email protected](
+ "func",
+ [
+ "fill(0, 2, 4)",
+ "fill(5, 1)",
+ "fill(6)",
+ ],
+)
+@run_in_pyodide
+def test_pyproxy_of_list_fill(selenium, func):
+ from pyodide.code import run_js
+ from pyodide.ffi import to_js
+
+ a = ["a", "b", "c", "d", "e"]
+ ajs = to_js(a)
+ func = run_js(f"(a) => a.{func}")
+ assert func(a) is a
+ func(ajs)
+ assert a == ajs.to_py()
| Proxied JS method.apply(context, list) fails while method(*list) doesn't
## 🐛 Bug
While working on [this PyScript issue](https://github.com/pyscript/pyscript/pull/1459) I've noticed that `context.method.apply(context, list)` doesn't work while `context.method(*list)` does.
I don't mind using the latter as that's also more Pythonic but that might surprise JS developers using Pyodide proxies that mimic JS APIs.
### To Reproduce
```python
import js
classList = js.document.body.classList
classList.add.apply(classList, ["a", "b"])
```
### Expected behavior
The method should be invoked with *n* arguments, as per the JS spec.
### Environment
- Pyodide Version: latest
- Browser version: any
- Any other relevant information: nope
### Additional context
Happy to have it as won't fix, but at least there's a related issue that explains the *gotcha*.
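A possible workaround sketch (untested here, but `pyodide.ffi.to_js` is the documented converter): turning the Python list into a real JS `Array` first should satisfy `apply`'s array-like protocol.
```python
import js
from pyodide.ffi import to_js

classList = js.document.body.classList
# to_js produces an actual JS Array, which Function.apply can index.
classList.add.apply(classList, to_js(["a", "b"]))
```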
| Unraveling this a bit:
1.
```js
function a(...params) {
console.log(params);
}
pyodide.runPython(`
from js import a
a.apply(None, [7, 6, 5])
`)
```
prints `[undefined, undefined, undefined]`.
2.
```js
function a(...params) {
console.log(params);
}
let l = pyodide.runPython(`[7, 6, 5]`)
a.apply(null, l);
```
also prints `[undefined, undefined, undefined]`.
Per the tc39 spec, [`apply`](https://tc39.es/ecma262/#sec-function.prototype.apply) uses [`CreateListFromArrayLike`](https://tc39.es/ecma262/#sec-createlistfromarraylike) to generate the argument list. The relevant snippet of `CreateListFromArrayLike` is:
> 5. Let index be 0.
> 6. Repeat, while index < len,
>
> a. Let indexName be ! [ToString](https://tc39.es/ecma262/#sec-tostring)([𝔽](https://tc39.es/ecma262/#%F0%9D%94%BD)(index)).
> b. Let next be ? [Get](https://tc39.es/ecma262/#sec-get-o-p)(obj, indexName).
> c. If elementTypes does not contain [Type](https://tc39.es/ecma262/#sec-ecmascript-data-types-and-values)(next), throw a TypeError exception.
> d. Append next to list.
> e. Set index to index + 1.
Focusing in on this step 6b, we see the problem:
```js
let l = pyodide.runPython(`[7, 6, 5]`);
console.log(l[0]); // undefined
```
Oops, `l[0]` should probably give `7`, not undefined.
Not sure if it's Proxy-related, but indexes in the `get` (or `has`) trap are always passed as strings, even for proxied arrays
I have a fix on a local branch, but I've gotten distracted implementing other `Array.prototype` methods on PyProxies... | 2023-05-13T06:29:25 |
pyodide/pyodide | 3,868 | pyodide__pyodide-3868 | [
"3867"
] | a62a319ab7f539af86b43bd92c89e42ee93c0b37 | diff --git a/src/py/_pyodide/_core_docs.py b/src/py/_pyodide/_core_docs.py
--- a/src/py/_pyodide/_core_docs.py
+++ b/src/py/_pyodide/_core_docs.py
@@ -988,6 +988,10 @@ def _new_exc(cls, name: str, message: str = "", stack: str = "") -> "JsException
result.stack = stack
return result
+ @classmethod
+ def new(cls, *args: Any) -> "JsException":
+ return cls()
+
def __str__(self):
return f"{self.name}: {self.message}"
| diff --git a/src/tests/test_pyodide.py b/src/tests/test_pyodide.py
--- a/src/tests/test_pyodide.py
+++ b/src/tests/test_pyodide.py
@@ -583,6 +583,18 @@ def test_run_python_js_error(selenium):
)
[email protected]_browsers(node="No DOMException in node")
+@run_in_pyodide
+def test_run_python_dom_error(selenium):
+ import pytest
+
+ from js import DOMException
+ from pyodide.ffi import JsException
+
+ with pytest.raises(JsException, match="oops"):
+ raise DOMException.new("oops")
+
+
def test_run_python_locals(selenium):
selenium.run_js(
"""
diff --git a/src/tests/test_typeconversions.py b/src/tests/test_typeconversions.py
--- a/src/tests/test_typeconversions.py
+++ b/src/tests/test_typeconversions.py
@@ -951,6 +951,10 @@ def test_dict_js2py2js(selenium):
def test_error_js2py2js(selenium):
selenium.run_js("self.err = new Error('hello there?');")
assert_js_to_py_to_js(selenium, "err")
+ if selenium.browser == "node":
+ return
+ selenium.run_js("self.err = new DOMException('hello there?');")
+ assert_js_to_py_to_js(selenium, "err")
def test_error_py2js2py(selenium):
| Aborted fetch requests freeze REPL
## 🐛 Bug
Fetch requests aborted (using the [signal](https://developer.mozilla.org/en-US/docs/Web/API/AbortSignal/timeout_static#examples) option) freeze the REPL and do not raise an exception.
### To Reproduce
```python
def fetch_response(url, timeout):
options = js.Object.new()
options.signal = js.AbortSignal.timeout(timeout)
return js.fetch(url, options)
response = await fetch_response('slow api', 1)
```
Dev Console shows:
```
Uncaught (in promise) PythonError: TypeError: invalid exception object
at new_error (pyodide.asm.js:9:14992)
at pyodide.asm.wasm:0x152d67
at pyodide.asm.wasm:0x152e6c
at Module.callPyObjectKwargs (pyodide.asm.js:9:75811)
at Module.callPyObject (pyodide.asm.js:9:76020)
at onRejected (pyodide.asm.js:9:59090)
```
This appears to occur because the `PyodideFuture` object, which the fetch returns, expects to receive a python Exception object on rejection. Instead, the returned `PyodideFuture` object gets a `JsProxy` of an `AbortError` (`DOMException`). A `JsProxy` of a `DOMException` can't be raised in pyodide.
```python
>>> import js
>>> raise js.DOMException.new('')
```
```
Traceback (most recent call last):
File "<console>", line 1, in <module>
TypeError: exceptions must derive from BaseException
```
### Expected behavior
Should raise a `JsException`.
Possible solution: Allow js.DOMException objects to be raised much like js.Error objects:
```python
>>> from pyodide.webloop import PyodideFuture
>>> fut = PyodideFuture()
>>> fut.set_exception(js.Error('hi'))
>>> fut.exception()
Error: hi
```
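For reference, once `DOMException` converts the way `Error` does, the aborted fetch from the reproduction above should surface as a catchable `JsException` instead of a fatal trap — a sketch reusing the `fetch_response` helper defined above:

```python
from pyodide.ffi import JsException

try:
    response = await fetch_response("slow api", 1)
except JsException as exc:
    # The AbortError now arrives as a proper Python exception.
    print(exc.name, exc.message)
```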
### Environment
- Pyodide Version: 0.23.2
```
>>> import pyodide
>>> pyodide.__version__
'0.23.2'
```
- Browser version: Chrome 113.0.5672.114
| Interesting... This happens only in Chrome: in Firefox `raise js.DomException("")` works as expected.
Thanks for the report @stephenmccalman!
This reproduces all the way back to Pyodide 0.17.0. It's a bit surprising nobody has noticed it before.
It seems that we feature detect exceptions with:
```
hasProperty(obj, "name") &&
hasProperty(obj, "message") &&
hasProperty(obj, "stack")
```
In Chrome, `DOMException` has no `stack` property (but in firefox it has it).
Interestingly this behavior of chrome is not compliant with a "should" in the whatwg spec:
> if an implementation gives native [Error](https://tc39.es/ecma262/multipage/fundamental-objects.html#sec-error-objects) objects special powers or nonstandard properties (such as a stack property), it should also expose those on [DOMException](https://webidl.spec.whatwg.org/#idl-DOMException) objects.
https://webidl.spec.whatwg.org/#es-DOMException-specialness
This bug also occurs in Safari. | 2023-05-18T04:53:12 |
pyodide/pyodide | 3,959 | pyodide__pyodide-3959 | [
"3948"
] | 313bc4bb72668937e63c8b72845a9b6c606c49cf | diff --git a/pyodide-build/pyodide_build/build_env.py b/pyodide-build/pyodide_build/build_env.py
--- a/pyodide-build/pyodide_build/build_env.py
+++ b/pyodide-build/pyodide_build/build_env.py
@@ -199,6 +199,7 @@ def _get_make_environment_vars(*, pyodide_root: Path | None = None) -> dict[str,
["make", "-f", str(PYODIDE_ROOT / "Makefile.envs"), ".output_vars"],
capture_output=True,
text=True,
+ env={"PYODIDE_ROOT": str(PYODIDE_ROOT)},
)
if result.returncode != 0:
| diff --git a/pyodide-build/pyodide_build/tests/fixture.py b/pyodide-build/pyodide_build/tests/fixture.py
--- a/pyodide-build/pyodide_build/tests/fixture.py
+++ b/pyodide-build/pyodide_build/tests/fixture.py
@@ -1,11 +1,14 @@
import json
+import os
import shutil
from pathlib import Path
from typing import Any
import pytest
-from ..common import chdir
+from conftest import ROOT_PATH
+from pyodide_build import build_env
+from pyodide_build.common import chdir
@pytest.fixture(scope="module")
@@ -89,3 +92,59 @@ def temp_xbuildenv(tmp_path_factory):
archive_name = shutil.make_archive("xbuildenv", "tar")
yield base, archive_name
+
+
[email protected](scope="function")
+def reset_env_vars():
+ # Will reset the environment variables to their original values after each test.
+
+ os.environ.pop("PYODIDE_ROOT", None)
+ old_environ = dict(os.environ)
+
+ try:
+ yield
+ finally:
+ os.environ.clear()
+ os.environ.update(old_environ)
+
+
[email protected](scope="function")
+def reset_cache():
+ # Will remove all caches before each test.
+
+ build_env.get_pyodide_root.cache_clear()
+ build_env.get_build_environment_vars.cache_clear()
+ build_env.get_unisolated_packages.cache_clear()
+
+ yield
+
+
[email protected](scope="function")
+def xbuildenv(selenium, tmp_path, reset_env_vars, reset_cache):
+ import subprocess as sp
+
+ assert "PYODIDE_ROOT" not in os.environ
+
+ envpath = Path(tmp_path) / ".pyodide-xbuildenv"
+ result = sp.run(
+ [
+ "pyodide",
+ "xbuildenv",
+ "create",
+ str(envpath),
+ "--root",
+ ROOT_PATH,
+ "--skip-missing-files",
+ ]
+ )
+
+ assert result.returncode == 0
+
+ cur_dir = os.getcwd()
+
+ os.chdir(tmp_path)
+
+ try:
+ yield tmp_path
+ finally:
+ os.chdir(cur_dir)
diff --git a/pyodide-build/pyodide_build/tests/test_build_env.py b/pyodide-build/pyodide_build/tests/test_build_env.py
--- a/pyodide-build/pyodide_build/tests/test_build_env.py
+++ b/pyodide-build/pyodide_build/tests/test_build_env.py
@@ -1,40 +1,15 @@
+# flake8: noqa
# This file contains tests that ensure build environment is properly initialized in
# both in-tree and out-of-tree builds.
-# TODO: move functions that are tested here to a separate module
-
import os
-from pathlib import Path
import pytest
from conftest import ROOT_PATH
from pyodide_build import build_env, common
-
[email protected](scope="function")
-def reset_env_vars():
- # Will reset the environment variables to their original values after each test.
-
- os.environ.pop("PYODIDE_ROOT", None)
- old_environ = dict(os.environ)
-
- try:
- yield
- finally:
- os.environ.clear()
- os.environ.update(old_environ)
-
-
[email protected](scope="function")
-def reset_cache():
- # Will remove all caches before each test.
-
- build_env.get_pyodide_root.cache_clear()
- build_env.get_build_environment_vars.cache_clear()
- build_env.get_unisolated_packages.cache_clear()
-
- yield
+from .fixture import reset_cache, reset_env_vars, xbuildenv
class TestInTree:
@@ -96,6 +71,13 @@ def test_get_build_environment_vars(self, reset_env_vars, reset_cache):
for var in extra_vars:
assert var in build_vars, f"Missing {var}"
+ def test_get_make_environment_vars(self, reset_env_vars, reset_cache):
+ make_vars = build_env._get_make_environment_vars()
+ assert make_vars["PYODIDE_ROOT"] == str(ROOT_PATH)
+
+ make_vars = build_env._get_make_environment_vars(pyodide_root=ROOT_PATH)
+ assert make_vars["PYODIDE_ROOT"] == str(ROOT_PATH)
+
def test_get_build_flag(self, reset_env_vars, reset_cache):
for key, val in build_env.get_build_environment_vars().items():
assert build_env.get_build_flag(key) == val
@@ -141,41 +123,6 @@ def test_get_build_environment_vars_host_env(
class TestOutOfTree(TestInTree):
- # TODO: selenium fixture is a hack to make these tests run only after building Pyodide.
- @pytest.fixture(scope="function", autouse=True)
- def xbuildenv(self, selenium, tmp_path, reset_env_vars, reset_cache):
- import subprocess as sp
-
- assert "PYODIDE_ROOT" not in os.environ
-
- envpath = Path(tmp_path) / ".pyodide-xbuildenv"
- result = sp.run(
- [
- "pyodide",
- "xbuildenv",
- "create",
- str(envpath),
- "--root",
- ROOT_PATH,
- "--skip-missing-files",
- ]
- )
-
- assert result.returncode == 0
-
- yield tmp_path
-
- @pytest.fixture(scope="function", autouse=True)
- def chdir_xbuildenv(self, xbuildenv):
- cur_dir = os.getcwd()
-
- os.chdir(xbuildenv)
-
- try:
- yield
- finally:
- os.chdir(cur_dir)
-
# Note: other tests are inherited from TestInTree
def test_init_environment(self, xbuildenv, reset_env_vars, reset_cache):
@@ -196,9 +143,17 @@ def test_get_pyodide_root(self, xbuildenv, reset_env_vars, reset_cache):
== xbuildenv / ".pyodide-xbuildenv/xbuildenv/pyodide-root"
)
- def test_in_xbuildenv(self, reset_env_vars, reset_cache):
+ def test_in_xbuildenv(self, xbuildenv, reset_env_vars, reset_cache):
assert build_env.in_xbuildenv()
+ def test_get_make_environment_vars(self, xbuildenv, reset_env_vars, reset_cache):
+ xbuildenv_root = xbuildenv / ".pyodide-xbuildenv/xbuildenv/pyodide-root"
+ make_vars = build_env._get_make_environment_vars()
+ assert make_vars["PYODIDE_ROOT"] == str(xbuildenv_root)
+
+ make_vars = build_env._get_make_environment_vars(pyodide_root=xbuildenv_root)
+ assert make_vars["PYODIDE_ROOT"] == str(xbuildenv_root)
+
def test_check_emscripten_version(monkeypatch):
s = None
diff --git a/pyodide-build/pyodide_build/tests/test_pypi.py b/pyodide-build/pyodide_build/tests/test_pypi.py
--- a/pyodide-build/pyodide_build/tests/test_pypi.py
+++ b/pyodide-build/pyodide_build/tests/test_pypi.py
@@ -1,4 +1,5 @@
-import os
+# flake8: noqa
+
import re
import subprocess
import sys
@@ -14,7 +15,7 @@
from typer.testing import CliRunner
from pyodide_build.cli import build
-from pyodide_build.common import chdir
+from .fixture import reset_cache, reset_env_vars, xbuildenv
runner = CliRunner()
@@ -213,16 +214,15 @@ def fake_pypi_url(fake_pypi_server):
pyodide_build.out_of_tree.pypi._PYPI_INDEX = pypi_old
-def test_fetch_or_build_pypi(selenium, tmp_path):
+def test_fetch_or_build_pypi(xbuildenv):
# TODO: - make test run without pyodide
- output_dir = tmp_path / "dist"
+ output_dir = xbuildenv / "dist"
# one pure-python package (doesn't need building) and one sdist package (needs building)
pkgs = ["pytest-pyodide", "pycryptodome==3.15.0"]
app = typer.Typer()
app.command()(build.main)
- os.chdir(tmp_path)
for p in pkgs:
result = runner.invoke(
app,
@@ -234,16 +234,15 @@ def test_fetch_or_build_pypi(selenium, tmp_path):
assert len(built_wheels) == len(pkgs)
-def test_fetch_or_build_pypi_with_deps_and_extras(selenium, tmp_path):
+def test_fetch_or_build_pypi_with_deps_and_extras(xbuildenv):
# TODO: - make test run without pyodide
- output_dir = tmp_path / "dist"
+ output_dir = xbuildenv / "dist"
# one pure-python package (doesn't need building) which depends on one sdist package (needs building)
pkgs = ["eth-hash[pycryptodome]==0.5.1", "safe-pysha3 (>=1.0.0)"]
app = typer.Typer()
app.command()(build.main)
- os.chdir(tmp_path)
for p in pkgs:
result = runner.invoke(
app,
@@ -255,38 +254,36 @@ def test_fetch_or_build_pypi_with_deps_and_extras(selenium, tmp_path):
assert len(built_wheels) == 3
-def test_fake_pypi_succeed(selenium, tmp_path, fake_pypi_url):
+def test_fake_pypi_succeed(xbuildenv, fake_pypi_url):
# TODO: - make test run without pyodide
- output_dir = tmp_path / "dist"
+ output_dir = xbuildenv / "dist"
# build package that resolves right
app = typer.Typer()
app.command()(build.main)
- with chdir(tmp_path):
- result = runner.invoke(
- app,
- ["resolves-package", "--build-dependencies"],
- )
+ result = runner.invoke(
+ app,
+ ["resolves-package", "--build-dependencies"],
+ )
- assert result.exit_code == 0, str(result.stdout) + str(result)
+ assert result.exit_code == 0, str(result.stdout) + str(result)
built_wheels = set(output_dir.glob("*.whl"))
assert len(built_wheels) == 5
-def test_fake_pypi_resolve_fail(selenium, tmp_path, fake_pypi_url):
- # TODO: - make test run without pyodide
- output_dir = tmp_path / "dist"
+def test_fake_pypi_resolve_fail(xbuildenv, fake_pypi_url):
+ output_dir = xbuildenv / "dist"
+
# build package that resolves right
app = typer.Typer()
app.command()(build.main)
- with chdir(tmp_path):
- result = runner.invoke(
- app,
- ["fails-package", "--build-dependencies"],
- )
+ result = runner.invoke(
+ app,
+ ["fails-package", "--build-dependencies"],
+ )
# this should fail and should not build any wheels
assert result.exit_code != 0, result.stdout
@@ -294,18 +291,17 @@ def test_fake_pypi_resolve_fail(selenium, tmp_path, fake_pypi_url):
assert len(built_wheels) == 0
-def test_fake_pypi_extras_build(selenium, tmp_path, fake_pypi_url):
+def test_fake_pypi_extras_build(xbuildenv, fake_pypi_url):
# TODO: - make test run without pyodide
- output_dir = tmp_path / "dist"
+ output_dir = xbuildenv / "dist"
# build package that resolves right
app = typer.Typer()
app.command()(build.main)
- with chdir(tmp_path):
- result = runner.invoke(
- app,
- ["pkg-b[docs]", "--build-dependencies"],
- )
+ result = runner.invoke(
+ app,
+ ["pkg-b[docs]", "--build-dependencies"],
+ )
# this should work
assert result.exit_code == 0, result.stdout
@@ -313,16 +309,16 @@ def test_fake_pypi_extras_build(selenium, tmp_path, fake_pypi_url):
assert len(built_wheels) == 2
-def test_fake_pypi_repeatable_build(selenium, tmp_path, fake_pypi_url):
- # TODO: - make test run without pyodide
- output_dir = tmp_path / "dist"
+def test_fake_pypi_repeatable_build(xbuildenv, fake_pypi_url):
+ output_dir = xbuildenv / "dist"
+
# build package that resolves right
app = typer.Typer()
app.command()(build.main)
# override a dependency version and build
# pkg-a
- with open(tmp_path / "requirements.txt", "w") as req_file:
+ with open("requirements.txt", "w") as req_file:
req_file.write(
"""
# Whole line comment
@@ -330,17 +326,17 @@ def test_fake_pypi_repeatable_build(selenium, tmp_path, fake_pypi_url):
pkg-a
"""
)
- with chdir(tmp_path):
- result = runner.invoke(
- app,
- [
- "-r",
- "requirements.txt",
- "--build-dependencies",
- "--output-lockfile",
- "lockfile.txt",
- ],
- )
+
+ result = runner.invoke(
+ app,
+ [
+ "-r",
+ "requirements.txt",
+ "--build-dependencies",
+ "--output-lockfile",
+ "lockfile.txt",
+ ],
+ )
# this should work
assert result.exit_code == 0, result.stdout
built_wheels = list(output_dir.glob("*.whl"))
@@ -354,11 +350,10 @@ def test_fake_pypi_repeatable_build(selenium, tmp_path, fake_pypi_url):
# rebuild from package-versions lockfile and
# check it outputs the same version number
- with chdir(tmp_path):
- result = runner.invoke(
- app,
- ["-r", str(tmp_path / "lockfile.txt")],
- )
+ result = runner.invoke(
+ app,
+ ["-r", "lockfile.txt"],
+ )
# should still have built 1.0.0 of pkg-c
built_wheels = list(output_dir.glob("*.whl"))
@@ -369,7 +364,7 @@ def test_fake_pypi_repeatable_build(selenium, tmp_path, fake_pypi_url):
assert len(built_wheels) == 2, result.stdout
-def test_bad_requirements_text(selenium, tmp_path):
+def test_bad_requirements_text(xbuildenv):
app = typer.Typer()
app.command()(build.main)
# test 1 - error on URL location in requirements
@@ -377,11 +372,11 @@ def test_bad_requirements_text(selenium, tmp_path):
# test 3 - error on editable install of package
bad_lines = [" pkg-c@http://www.pkg-c.org", " -r bob.txt", " -e pkg-c"]
for line in bad_lines:
- with open(tmp_path / "requirements.txt", "w") as req_file:
+ with open("requirements.txt", "w") as req_file:
req_file.write(line + "\n")
- with chdir(tmp_path):
- result = runner.invoke(
- app,
- ["-r", "requirements.txt"],
- )
- assert result.exit_code != 0 and line.strip() in str(result)
+
+ result = runner.invoke(
+ app,
+ ["-r", "requirements.txt"],
+ )
+ assert result.exit_code != 0 and line.strip() in str(result)
| pyodide-build test suite fails locally
I'm trying to run the pyodide-build test suite locally inside Docker,
```
source pyodide_env.sh
pytest pyodide-build
```
and so far it has had multiple failures,
```
FAILED pyodide-build/pyodide_build/tests/test_build_env.py::TestOutOfTree::get_build_environment_vars[node] - PermissionError: [Errno 13] Permission denied: '/packages'
FAILED pyodide-build/pyodide_build/tests/test_build_env.py::TestOutOfTree::get_build_flag[node] - PermissionError: [Errno 13] Permission denied: '/packages'
FAILED pyodide-build/pyodide_build/tests/test_build_env.py::TestOutOfTree::init_environment[node] - PermissionError: [Errno 13] Permission denied: '/packages'
FAILED pyodide-build/pyodide_build/tests/test_build_env.py::TestOutOfTree::get_pyodide_root[node] - PermissionError: [Errno 13] Permission denied: '/packages'
FAILED pyodide-build/pyodide_build/tests/test_build_env.py::TestOutOfTree::in_xbuildenv[node] - PermissionError: [Errno 13] Permission denied: '/packages'
FAILED pyodide-build/pyodide_build/tests/test_pypi.py::fetch_or_build_pypi[node] - AssertionError: * Creating virtualenv isolated environment...
FAILED pyodide-build/pyodide_build/tests/test_pypi.py::fetch_or_build_pypi_with_deps_and_extras[node] - AssertionError: Successfully fetched: eth_hash-0.5.2-py3-none-any.whl
FAILED pyodide-build/pyodide_build/tests/test_pypi.py::fake_pypi_succeed[node] - AssertionError: 127.0.0.1 - - [23/Jun/2023 14:06:56] "GET /simple/resolves-package/ HTTP/1.1" 200 -
FAILED pyodide-build/pyodide_build/tests/test_pypi.py::fake_pypi_extras_build[node] - AssertionError: 127.0.0.1 - - [23/Jun/2023 14:07:02] "GET /simple/pkg-b/ HTTP/1.1" 200 -
```
Maybe I'm doing something wrong or I should spend time reading through what we are doing in the CircleCI config but IMO the above should have worked, or else we need to update [developer instructions](https://pyodide.org/en/stable/development/testing.html#testing-and-benchmarking).
Anyway, I can probably figure it out by spending enough time on it, but we probably need to improve the docs as this is not very developer friendly.
| Probably `PYODIDE_ROOT` is defined and empty for some reason?
What happens if you say `export PYODIDE_ROOT=$(pwd)` first?
Okay it still fails.
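For what it's worth, the fix in the diff above pins the resolved `PYODIDE_ROOT` onto the `make -f Makefile.envs` subprocess instead of relying on whatever happens to be in the caller's environment — a minimal sketch of the before/after (path illustrative):

```python
import subprocess

# Before: make sees whatever PYODIDE_ROOT leaks in from the caller's
# environment -- possibly empty or pointing at the wrong tree.
subprocess.run(["make", "-f", "Makefile.envs", ".output_vars"])

# After: the resolved root is passed explicitly, so Makefile.envs always
# sees a consistent value.
subprocess.run(
    ["make", "-f", "Makefile.envs", ".output_vars"],
    env={"PYODIDE_ROOT": "/path/to/pyodide"},
)
```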
I think I made this bug while working on #3922. Will open a PR that fixes this. | 2023-06-26T13:45:40 |
pyodide/pyodide | 4,018 | pyodide__pyodide-4018 | [
"4017"
] | 0e3678cf5889c99a0c18486ee02fe5740e580082 | diff --git a/pyodide-build/pyodide_build/common.py b/pyodide-build/pyodide_build/common.py
--- a/pyodide-build/pyodide_build/common.py
+++ b/pyodide-build/pyodide_build/common.py
@@ -371,3 +371,9 @@ def get_wheel_dist_info_dir(wheel: ZipFile, pkg_name: str) -> str:
)
return info_dir
+
+
+def check_wasm_magic_number(file_path: Path) -> bool:
+ WASM_BINARY_MAGIC = b"\0asm"
+ with file_path.open(mode="rb") as file:
+ return file.read(4) == WASM_BINARY_MAGIC
| diff --git a/pyodide-build/pyodide_build/tests/test_common.py b/pyodide-build/pyodide_build/tests/test_common.py
--- a/pyodide-build/pyodide_build/tests/test_common.py
+++ b/pyodide-build/pyodide_build/tests/test_common.py
@@ -3,6 +3,7 @@
import pytest
from pyodide_build.common import (
+ check_wasm_magic_number,
environment_substitute_args,
extract_wheel_metadata_file,
find_missing_executables,
@@ -152,3 +153,14 @@ def test_extract_wheel_metadata_file(tmp_path):
with pytest.raises(Exception):
extract_wheel_metadata_file(input_path_empty, output_path_empty)
+
+
+def test_check_wasm_magic_number(tmp_path):
+ wasm_magic_number = b"\x00asm\x01\x00\x00\x00\x00\x11"
+ not_wasm_magic_number = b"\x7fELF\x02\x01\x01\x00\x00\x00"
+
+ (tmp_path / "goodfile.so").write_bytes(wasm_magic_number)
+ assert check_wasm_magic_number(tmp_path / "goodfile.so") is True
+
+ (tmp_path / "badfile.so").write_bytes(not_wasm_magic_number)
+ assert check_wasm_magic_number(tmp_path / "badfile.so") is False
| Add check for WASM magic number in .so files
> `pyodide build` now replaces native `.so` slugs with Emscripten
> slugs. Usually `.so`s in the generated wheels are actually Emscripten `.so`s
> so this is good. If they are actually native `.so`s then there is a problem
> either way.
Not very critical, but should we actually check that the .so are emscripten .so rather than native .so that ended up there by mistake? For instance, we could check for the [WASM magic number](https://openhome.cc/eGossip/WebAssembly/Module.html) in the first 4 bytes maybe? It's supposed to be `0061 736d`. Though I get the same bytes but in a different order when I try,
```
$ hexdump -n 8 numpy/core/_multiarray_umath.cpython-311-wasm32-emscripten.so
0000000 6100 6d73 0001 0000
```
but maybe I'm using hexdump wrong (**Edit:** yes, with the `-C` option it's better)
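A byte-level check sidesteps hexdump's default 16-bit grouping entirely — a quick sketch using the same magic constant as the diff above (file path illustrative):

```python
from pathlib import Path

WASM_BINARY_MAGIC = b"\0asm"  # bytes 00 61 73 6d

path = Path("_multiarray_umath.cpython-311-wasm32-emscripten.so")
with path.open("rb") as f:
    print(f.read(4) == WASM_BINARY_MAGIC)
```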
_Originally posted by @rth in https://github.com/pyodide/pyodide/issues/3927#issuecomment-1599511454_
| 2023-07-26T11:19:48 |
|
pyodide/pyodide | 4,090 | pyodide__pyodide-4090 | [
"4089",
"4089"
] | 7b0af2d4cdf761e1d641cb9932a317b1eab0f42f | diff --git a/conftest.py b/conftest.py
--- a/conftest.py
+++ b/conftest.py
@@ -40,6 +40,10 @@
pyodide.pyimport("pyodide_js._api")
"""
+only_node = pytest.mark.xfail_browsers(
+ chrome="node only", firefox="node only", safari="node only"
+)
+
def pytest_addoption(parser):
group = parser.getgroup("general")
| diff --git a/src/tests/test_streams.py b/src/tests/test_streams.py
--- a/src/tests/test_streams.py
+++ b/src/tests/test_streams.py
@@ -1,7 +1,7 @@
import pytest
from pytest_pyodide import run_in_pyodide
-from conftest import strip_assertions_stderr
+from conftest import only_node, strip_assertions_stderr
@pytest.mark.skip_refcount_check
@@ -576,3 +576,49 @@ def test_custom_stdout_interrupts(selenium, method):
pyodide.setStdout();
"""
)
+
+
+@only_node
+@run_in_pyodide
+def test_node_eagain(selenium):
+ from pyodide.code import run_js
+
+ result = run_js(
+ """
+ pyodide.setStdin({
+ i: 0,
+ stdin() {
+ this.i ++;
+ if (this.i < 3) {
+ throw {code: "EAGAIN"};
+ }
+ this.i = 0;
+ return "abcdefg";
+ }
+ });
+ let result = [];
+ pyodide.setStdout({
+ i: 0,
+ write(a) {
+ this.i ++;
+ if (this.i < 3) {
+ throw {code: "EAGAIN"};
+ }
+ this.i = 0;
+ result.push(new TextDecoder().decode(a));
+ return a.length;
+ }
+ });
+ result
+ """
+ )
+ try:
+ assert input() == "abcdefg"
+ print("hi there!")
+ assert result[0] == "hi there!\n"
+ finally:
+ run_js(
+ """
+ pyodide.setStdin();
+ """
+ )
| New Pyodide fatal error in scipy tests: Error: EAGAIN: resource temporarily unavailable, write
This started to happen two days ago in https://github.com/lesteve/scipy-tests-pyodide; here is [a build log](https://github.com/lesteve/scipy-tests-pyodide/actions/runs/5946896593/job/16128148017).
The stack trace looks like this:
```
Error: EAGAIN: resource temporarily unavailable, write
at Object.writeSync (node:fs:936:3)
at ue.write (/home/runner/work/scipy-tests-pyodide/scipy-tests-pyodide/node_modules/pyodide/pyodide.asm.js:6566:23)
at Object.write (/home/runner/work/scipy-tests-pyodide/scipy-tests-pyodide/node_modules/pyodide/pyodide.asm.js:6301:28)
at Object.write (/home/runner/work/scipy-tests-pyodide/scipy-tests-pyodide/node_modules/pyodide/pyodide.asm.js:12457:46)
at doWritev (/home/runner/work/scipy-tests-pyodide/scipy-tests-pyodide/node_modules/pyodide/pyodide.asm.js:19506:23)
at _fd_write (/home/runner/work/scipy-tests-pyodide/scipy-tests-pyodide/node_modules/pyodide/pyodide.asm.js:19589:19)
at write (wasm://wasm/025b4bda:wasm-function[9088]:0x45849f)
at _Py_write (wasm://wasm/025b4bda:wasm-function[4144]:0x2d9eec)
at _io_FileIO_write (wasm://wasm/025b4bda:wasm-function[6443]:0x39de9f)
at _PyCFunctionWithKeywords_TrampolineCall (/home/runner/work/scipy-tests-pyodide/scipy-tests-pyodide/node_modules/pyodide/pyodide.asm.js:6855:33) {
errno: -11,
syscall: 'write',
code: 'EAGAIN',
pyodide_fatal_error: true
}
```
For some reason, it seems to happen right at the end of `scipy.special.tests` when pytest is printing its summary. In my experience, the timing of stdout vs stderr cannot be fully trusted, so maybe it happens in a test towards the end of scipy.special.tests. I'll be able to look into it more next week.
My wild guess is that this could be related to #4035?
| Sure sounds related, thanks for the report!
Seems like we already solved a similar problem with reads from stdin with `readHelper` here:
https://github.com/pyodide/pyodide/blob/main/src/js/streams.ts#L93-L110
I think we just need the same logic for `writeHelper`.
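The pattern in question, sketched in Python for clarity (the real helpers live in `src/js/streams.ts`): retry the underlying call while it reports `EAGAIN`, rather than letting the error escape as a fatal Pyodide trap.

```python
import errno


def write_helper(write, data):
    while True:
        try:
            return write(data)
        except OSError as exc:
            if exc.errno != errno.EAGAIN:
                raise
            # Resource temporarily unavailable: try again.
```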
| 2023-08-24T09:13:36 |
pyodide/pyodide | 4,136 | pyodide__pyodide-4136 | [
"4118"
] | ab35dea36ecd92b7748932657514c2542bbdc849 | diff --git a/pyodide-build/pyodide_build/out_of_tree/build.py b/pyodide-build/pyodide_build/out_of_tree/build.py
--- a/pyodide-build/pyodide_build/out_of_tree/build.py
+++ b/pyodide-build/pyodide_build/out_of_tree/build.py
@@ -15,6 +15,9 @@ def run(
cxxflags += f" {os.environ.get('CXXFLAGS', '')}"
ldflags = build_env.get_build_flag("SIDE_MODULE_LDFLAGS")
ldflags += f" {os.environ.get('LDFLAGS', '')}"
+ target_install_dir = os.environ.get(
+ "TARGETINSTALLDIR", build_env.get_build_flag("TARGETINSTALLDIR")
+ )
env = os.environ.copy()
env.update(build_env.get_build_environment_vars())
@@ -24,7 +27,7 @@ def run(
cflags=cflags,
cxxflags=cxxflags,
ldflags=ldflags,
- target_install_dir="",
+ target_install_dir=target_install_dir,
exports=exports,
)
diff --git a/pyodide-build/pyodide_build/pywasmcross.py b/pyodide-build/pyodide_build/pywasmcross.py
--- a/pyodide-build/pyodide_build/pywasmcross.py
+++ b/pyodide-build/pyodide_build/pywasmcross.py
@@ -226,14 +226,19 @@ def replay_genargs_handle_dashI(arg: str, target_install_dir: str) -> str | None
The new argument, or None to delete the argument.
"""
assert arg.startswith("-I")
- if (
- str(Path(arg[2:]).resolve()).startswith(sys.prefix + "/include/python")
- and "site-packages" not in arg
- ):
- return arg.replace("-I" + sys.prefix, "-I" + target_install_dir)
+
# Don't include any system directories
if arg[2:].startswith("/usr"):
return None
+
+ # Replace local Python include paths with the cross compiled ones
+ include_path = str(Path(arg[2:]).resolve())
+ if include_path.startswith(sys.prefix + "/include/python"):
+ return arg.replace("-I" + sys.prefix, "-I" + target_install_dir)
+
+ if include_path.startswith(sys.base_prefix + "/include/python"):
+ return arg.replace("-I" + sys.base_prefix, "-I" + target_install_dir)
+
return arg
| diff --git a/pyodide-build/pyodide_build/tests/test_pywasmcross.py b/pyodide-build/pyodide_build/tests/test_pywasmcross.py
--- a/pyodide-build/pyodide_build/tests/test_pywasmcross.py
+++ b/pyodide-build/pyodide_build/tests/test_pywasmcross.py
@@ -10,6 +10,7 @@
get_library_output,
handle_command_generate_args,
replay_f2c,
+ replay_genargs_handle_dashI,
)
@@ -145,6 +146,30 @@ def test_handle_command_ldflags(build_args):
)
+def test_replay_genargs_handle_dashI(monkeypatch):
+ import sys
+
+ mock_prefix = "/mock_prefix"
+ mock_base_prefix = "/mock_base_prefix"
+ monkeypatch.setattr(sys, "prefix", mock_prefix)
+ monkeypatch.setattr(sys, "base_prefix", mock_base_prefix)
+
+ target_dir = "/target"
+ target_cpython_include = "/target/include/python3.11"
+
+ assert replay_genargs_handle_dashI("-I/usr/include", target_dir) is None
+ assert (
+ replay_genargs_handle_dashI(f"-I{mock_prefix}/include/python3.11", target_dir)
+ == f"-I{target_cpython_include}"
+ )
+ assert (
+ replay_genargs_handle_dashI(
+ f"-I{mock_base_prefix}/include/python3.11", target_dir
+ )
+ == f"-I{target_cpython_include}"
+ )
+
+
def test_f2c():
assert f2c_wrap("gfortran test.f") == "gcc test.c"
assert f2c_wrap("gcc test.c") is None
| Compilation error with out-of-tree builds with 0.24.0a1: LONG_BIT definition appears wrong for platform
## 🐛 Bug
Trying to use out-of-tree builds with 0.24.0a1 yields a compilation error: LONG_BIT definition appears wrong for platform
Maybe this is not supposed to work with the alpha version but will work fine with the 0.24 release?
I have found this same error in other places in the issue tracker e.g. https://github.com/pyodide/pyodide/discussions/2186#discussioncomment-2784855 or https://github.com/pyodide/pyodide/issues/775#issuecomment-717527769 but without a clear solution.
<details>
<summary>Excerpt of the build log with the error</summary>
```
building 'sklearn.__check_build._check_build' extension
creating build/temp.emscripten_3_1_44_wasm32-cpython-311/sklearn/__check_build
/tmp/tmp_co55dq1/cc -DNDEBUG -g -fwrapv -O3 -Wall -O2 -g0 -fPIC -DPY_CALL_TRAMPOLINE -DNPY_NO_DEPRECATED_API=NPY_1_7_API_VERSION -I/tmp/build-env-o9s9adj9/include -I/opt/conda/envs/env/include/python3.11 -c sklearn/__check_build/_check_build.c -o build/temp.emscripten_3_1_44_wasm32-cpython-311/sklearn/__check_build/_check_build.o -g0 -O2
In file included from sklearn/__check_build/_check_build.c:35:
In file included from /opt/conda/envs/env/include/python3.11/Python.h:38:
/opt/conda/envs/env/include/python3.11/pyport.h:601:2: error: "LONG_BIT definition appears wrong for platform (bad gcc/glibc config?)."
601 | #error "LONG_BIT definition appears wrong for platform (bad gcc/glibc config?)."
| ^
sklearn/__check_build/_check_build.c:415:39: error: expression is not an integer constant expression
415 | enum { __pyx_check_sizeof_voidp = 1 / (int)(SIZEOF_VOID_P == sizeof(void*)) };
| ^~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
sklearn/__check_build/_check_build.c:415:41: note: division by zero
415 | enum { __pyx_check_sizeof_voidp = 1 / (int)(SIZEOF_VOID_P == sizeof(void*)) };
| ^
2 errors generated.
emcc: error: '/emsdk/upstream/bin/clang -target wasm32-unknown-emscripten -fignore-exceptions -fvisibility=default -mllvm -combiner-global-alias-analysis=false -mllvm -enable-emscripten-sjlj -mllvm -disable-lsr -DEMSCRIPTEN -Werror=implicit-function-declaration --sysroot=/emsdk/upstream/emscripten/cache/sysroot -Xclang -iwithsysroot/include/fakesdl -Xclang -iwithsysroot/include/compat -DNDEBUG -g3 -fwrapv -O3 -Wall -O2 -g0 -fPIC -DPY_CALL_TRAMPOLINE -DNPY_NO_DEPRECATED_API=NPY_1_7_API_VERSION -I/tmp/build-env-o9s9adj9/include -I/opt/conda/envs/env/include/python3.11 -c -g0 -O2 -Werror=implicit-function-declaration -Werror=mismatched-parameter-types -Werror=return-type -O2 -g0 -fPIC -I/.pyodide-xbuildenv/xbuildenv/pyodide-root/cpython/installs/python-3.11.3/include/python3.11 -I/.pyodide-xbuildenv/xbuildenv/pyodide-root/cpython/installs/python-3.11.3/include/python3.11 sklearn/__check_build/_check_build.c -o build/temp.emscripten_3_1_44_wasm32-cpython-311/sklearn/__check_build/_check_build.o' failed (returned 1)
error: command '/tmp/tmp_co55dq1/cc' failed with exit code 1
```
</details>
### To Reproduce
```sh
# Need to set up emscripten 3.1.44 the usual way first
pip install pyodide-build==0.24.0a1
pyodide build scikit-learn
```
### Expected behavior
Builds a Pyodide wheel for scikit-learn
### Environment
- Pyodide Version<!-- (e.g. 1.8.1) -->: 0.24.0a1
- Browser version<!-- (e.g. Chrome 95.0.4638.54) -->:
- Any other relevant information:
### Additional context
With Pyodide 0.24 the scikit-learn test suite fully passes, except some tests due to Pyodide limitations.
I saw the 0.24 release was approaching and I wanted to give it a go and start running the scikit-learn tests inside Pyodide and in the scikit-learn CI.
| Thanks for the report @lesteve! It looks like a regression. The same error doesn't occur in 0.23.4?
This works fine with 0.23.4 indeed.
Note this is not specific to scikit-learn, for example you get the same error with simplejson or msgpack. Note that both of these packages have a fall-back to a pure Python implementation which makes the wheel building succeed despite the compilation error.
That's very interesting. I think Pyodide side didn't change very much, so maybe there was a regression between Emscripten 3.1.32 (Pyodide 0.23.4) and 3.1.44 (Pyodide 0.24.0a1).
I tried to reproduce the error with Pyodide 0.24.0a1 and scikit-learn 1.3.0 but failed (it built successfully).
I was also able to build scikit-build main branch with Pyodide 0.24.0a1. So probably this is not an Emscripten issue.
I also tested with both Python 3.11.2 and Python 3.11.3... so I guess the host Python version is not an issue either.
Build log: [out.log](https://github.com/pyodide/pyodide/files/12550211/out.log)
Hmmm indeed I tried in a Debian-based Docker container and it works fine. It doesn't work on my machine which is an ArchLinux-variant. Maybe I should consider myself lucky that it worked until Pyodide 0.23.
This is a little bit annoying but at the same time I can fully understand there are higher priorities for Pyodide ... I'll leave this issue open just in case someone has some insights and I'll close after some time if not.
For reference:
```
❯ cat /etc/os-release
NAME="EndeavourOS"
PRETTY_NAME="EndeavourOS"
ID="endeavouros"
ID_LIKE="arch"
BUILD_ID="2023.03.26"
ANSI_COLOR="38;2;23;147;209"
HOME_URL="https://endeavouros.com"
DOCUMENTATION_URL="https://discovery.endeavouros.com"
SUPPORT_URL="https://forum.endeavouros.com"
BUG_REPORT_URL="https://forum.endeavouros.com/c/arch-based-related-questions/bug-reports"
PRIVACY_POLICY_URL="https://endeavouros.com/privacy-policy-2"
LOGO="endeavouros"
```
Yes, let's keep this issue open until we find the root cause.
I looked a bit more, for some reason this happens only in a conda environment. I can build fine on my ArchLinux machine in a Python virtual env (`python -m venv my-venv`) and I reproduce the issue using the `condaforge/miniforge3` Docker image based on Ubuntu 20.04.
I can reproduce the issue in a CI see [scikit-learn PR](https://github.com/scikit-learn/scikit-learn/pull/27346) and [CI build](https://dev.azure.com/scikit-learn/scikit-learn/_build/results?buildId=58949&view=logs&jobId=6fac3219-cc32-5595-eb73-7f086a643b12&j=6fac3219-cc32-5595-eb73-7f086a643b12&t=6856d197-9931-5ad8-f897-5714e4bdfa31).
My hunch is that you get this problem as soon as you are not using your system Python (which is the case for the above CI on Azure or a conda environment). I suspect there are things in `pywasmcross.py` like https://github.com/pyodide/pyodide/blob/3b428ceb6b7b9b3abfeab7a75fbf0fac6c9d76bb/pyodide-build/pyodide_build/pywasmcross.py#L234-L236 that remove the system Python includes, which makes it work in this case (edit: it works in a venv using the system Python, so it may not be as simple as this ...)
If you look at the failing command, you will see first the host Python (`-I/opt/conda/envs/env/include/python3.11`) and then the xbuildenv one (`-I/.pyodide-xbuildenv/xbuildenv/pyodide-root/cpython/installs/python-3.11.3/include/python3.11`). This seems to be the wrong order, since you want the xbuildenv one first unless I am missing something. With some quick and dirty print statements, it seems like the include order is the one I am expecting in Pyodide 0.23.3.
Here is the command with a new line on each `-I` for better readability:
```
emcc: error: '/emsdk/upstream/bin/clang -target wasm32-unknown-emscripten -fignore-exceptions -fvisibility=default -mllvm -combiner-global-alias-analysis=false -mllvm -enable-emscripten-sjlj -mllvm -disable-lsr -DEMSCRIPTEN -Werror=implicit-function-declaration --sysroot=/emsdk/upstream/emscripten/cache/sysroot -Xclang -iwithsysroot/include/fakesdl -Xclang -iwithsysroot/include/compat -DNDEBUG -g3 -fwrapv -O3 -Wall -O2 -g0 -fPIC -DPY_CALL_TRAMPOLINE -DNPY_NO_DEPRECATED_API=NPY_1_7_API_VERSION
-I/tmp/build-env-o9s9adj9/include
-I/opt/conda/envs/env/include/python3.11 -c -g0 -O2 -Werror=implicit-function-declaration -Werror=mismatched-parameter-types -Werror=return-type -O2 -g0 -fPIC
-I/.pyodide-xbuildenv/xbuildenv/pyodide-root/cpython/installs/python-3.11.3/include/python3.11
-I/.pyodide-xbuildenv/xbuildenv/pyodide-root/cpython/installs/python-3.11.3/include/python3.11 sklearn/__check_build/_check_build.c -o build/temp.emscripten_3_1_44_wasm32-cpython-311/sklearn/__check_build/_check_build.o' failed (returned 1)
```
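The eventual fix (see the `pywasmcross.py` diff at the top of this entry) rewrites host Python include flags for both `sys.prefix` and `sys.base_prefix`, which covers conda-style layouts — condensed:

```python
import sys
from pathlib import Path


def rewrite_dash_i(arg: str, target_install_dir: str) -> str | None:
    assert arg.startswith("-I")
    if arg[2:].startswith("/usr"):
        return None  # never forward system include dirs
    include_path = str(Path(arg[2:]).resolve())
    # Host CPython headers must be swapped for the target's headers,
    # otherwise the host pyconfig.h (expecting a 64-bit long) clashes with
    # the 32-bit wasm target and triggers the LONG_BIT error.
    for prefix in (sys.prefix, sys.base_prefix):
        if include_path.startswith(prefix + "/include/python"):
            return arg.replace("-I" + prefix, "-I" + target_install_dir)
    return arg
```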
Thanks for the investigation @lesteve! Indeed I changed the order of arguments in #3866.
Probably we should update this part to filter out other local Python include directories as well.
https://github.com/pyodide/pyodide/blob/3b428ceb6b7b9b3abfeab7a75fbf0fac6c9d76bb/pyodide-build/pyodide_build/pywasmcross.py#L234-L236 | 2023-09-13T13:08:27 |
pyodide/pyodide | 4,223 | pyodide__pyodide-4223 | [
"4216"
] | 74bd881129d070fed32a9e3f923d7ffa6b6fbd25 | diff --git a/pyodide-build/pyodide_build/build_env.py b/pyodide-build/pyodide_build/build_env.py
--- a/pyodide-build/pyodide_build/build_env.py
+++ b/pyodide-build/pyodide_build/build_env.py
@@ -62,6 +62,8 @@
"SYSCONFIG_NAME",
"TARGETINSTALLDIR",
"WASM_LIBRARY_DIR",
+ "CMAKE_TOOLCHAIN_FILE",
+ "PYO3_CONFIG_FILE",
}
@@ -169,10 +171,13 @@ def get_build_environment_vars() -> dict[str, str]:
tools_dir = Path(__file__).parent / "tools"
- env["CMAKE_TOOLCHAIN_FILE"] = str(
- tools_dir / "cmake/Modules/Platform/Emscripten.cmake"
- )
- env["PYO3_CONFIG_FILE"] = str(tools_dir / "pyo3_config.ini")
+ if "CMAKE_TOOLCHAIN_FILE" not in env:
+ env["CMAKE_TOOLCHAIN_FILE"] = str(
+ tools_dir / "cmake/Modules/Platform/Emscripten.cmake"
+ )
+
+ if "PYO3_CONFIG_FILE" not in env:
+ env["PYO3_CONFIG_FILE"] = str(tools_dir / "pyo3_config.ini")
hostsitepackages = env["HOSTSITEPACKAGES"]
pythonpath = [
| Emscripten.cmake missing from pyodide-build wheel
## 🐛 Bug
It seems like `Emscripten.cmake` is not bundled with `pyodide-build` wheels any more (since version 0.24.0) and I wonder whether this is a bug or an intended change. (I haven't found anything in the changelog). We were relying on the presence of the toolchain file over there in https://github.com/igraph/python-igraph/ to build a wasm wheel in CI and it is not working any more.
### To Reproduce
See, e.g., https://github.com/igraph/python-igraph/actions/runs/6459502630/job/17535393423
### Expected behavior
`pyodide-build` wheels should keep on bundling `Emscripten.cmake` so we can simply `pip install pyodide-build` in CI and then build `python-igraph`'s wasm wheels as before.
### Environment
- Pyodide Version<!-- (e.g. 1.8.1) -->: 0.24.1
- Browser version<!-- (e.g. Chrome 95.0.4638.54) -->: not applicable
| Thanks for the report. Change due to https://github.com/pyodide/pyodide/pull/3934 @ryanking13 would know more about it.
Thanks for the feedback! In the meanwhile, we worked around the issue by adding the missing file during the CI build like this:
https://github.com/igraph/python-igraph/commit/e7903df985a81e166866ea1eeb91d191bfc83136#diff-5a273ce7e6a1e1cb40e78fcc5f3ea2a1175ffe117d9768b33ca4cafec1e460f1
I'm pretty sure that the change was not intentional, though, as `pyodide-build` still keeps on looking for the toolchain file while setting up the environment (and it even overrides `CMAKE_TOOLCHAIN_FILE` from the external environment so I cannot overrule `pyodide-build`'s decision): https://github.com/pyodide/pyodide/blob/main/pyodide-build/pyodide_build/build_env.py#L172
Oh... thanks for catching this. It was totally not intentional.
> (and it even overrides CMAKE_TOOLCHAIN_FILE from the external environment so I cannot overrule pyodide-build's decision)
I think we need to fix this too, so that if a user provides CMAKE_TOOLCHAIN_FILE, it should not be overridden.
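The merged change (diff above) makes the bundled defaults mere fallbacks, so a CI setup can point at its own files instead of patching the wheel, e.g. (path illustrative):

```python
import os

# Respected by pyodide-build's environment setup after the fix:
os.environ["CMAKE_TOOLCHAIN_FILE"] = "/opt/toolchains/Emscripten.cmake"
```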
| 2023-10-14T06:51:28 |
|
pyodide/pyodide | 4,269 | pyodide__pyodide-4269 | [
"4148"
] | 9c498a8d4f80f30837b47c5bb8e3d3c9e777884f | diff --git a/pyodide-build/pyodide_build/create_xbuildenv.py b/pyodide-build/pyodide_build/create_xbuildenv.py
--- a/pyodide-build/pyodide_build/create_xbuildenv.py
+++ b/pyodide-build/pyodide_build/create_xbuildenv.py
@@ -113,16 +113,6 @@ def create(
_copy_wasm_libs(pyodide_root, xbuildenv_root, skip_missing_files)
(xbuildenv_root / "package.json").write_text("{}")
- res = subprocess.run(
- ["npm", "i", "node-fetch@2"],
- cwd=xbuildenv_root,
- capture_output=True,
- encoding="utf8",
- )
- if res.returncode != 0:
- logger.error("Failed to install node-fetch:")
- exit_with_stdio(res)
-
res = subprocess.run(
["pip", "freeze", "--path", get_build_flag("HOSTSITEPACKAGES")],
capture_output=True,
| diff --git a/src/tests/test_pyodide.py b/src/tests/test_pyodide.py
--- a/src/tests/test_pyodide.py
+++ b/src/tests/test_pyodide.py
@@ -1474,19 +1474,13 @@ def test_args_OO(selenium_standalone_noload):
@pytest.mark.xfail_browsers(chrome="Node only", firefox="Node only", safari="Node only")
def test_relative_index_url(selenium, tmp_path):
tmp_dir = Path(tmp_path)
- version_result = subprocess.run(
- ["node", "-v"], capture_output=True, encoding="utf8"
- )
- extra_node_args = []
- if version_result.stdout.startswith("v14"):
- extra_node_args.append("--experimental-wasm-bigint")
+ subprocess.run(["node", "-v"], capture_output=True, encoding="utf8")
shutil.copy(ROOT_PATH / "dist/pyodide.js", tmp_dir / "pyodide.js")
result = subprocess.run(
[
"node",
- *extra_node_args,
"-e",
rf"""
const loadPyodide = require("{tmp_dir / "pyodide.js"}").loadPyodide;
@@ -1529,8 +1523,6 @@ def test_index_url_calculation_source_map(selenium):
node_options = ["--enable-source-maps"]
result = subprocess.run(["node", "-v"], capture_output=True, encoding="utf8")
- if result.stdout.startswith("v14"):
- node_options.append("--experimental-wasm-bigint")
DIST_DIR = str(Path.cwd() / "dist")
 | [Discussion] Drop support for Node versions that passed end-of-life
## Proposal
Drop support for Node < 18 from the next Pyodide major release (0.25.0)
## Background
The end-of-life of [Node.js 14 and 16 passed a few days ago](https://nodejs.dev/en/about/releases/). We just finished a major release, so I think this is a good time to talk about Node version support.

Here are the things that are currently associated with Node.js versions in Pyodide.
- The documentation mentions [how to run Pyodide in Node.js < 18](https://pyodide.org/en/stable/usage/index.html#node-js-versions-0-17)
- We have some compat codes for older Node versions
- https://github.com/pyodide/pyodide/pull/4100#discussion_r1308449699
- [compat.ts](https://github.com/pyodide/pyodide/blob/e2c2884e7a9afd90b23849220ee578bd2d523458/src/js/compat.ts#L40)
- JSPI requires Node >= 20
- The Docker image used for Pyodide CI uses Node 20
### Benefits from dropping Node < 18 support
- No more `node-fetch`.
- No more `--experimental-wasm-bigint` flags
- Better `MessageChannel` support (https://github.com/pyodide/pyodide/issues/4006)
- Some useful methods like [`AbortController`](https://developer.mozilla.org/en-US/docs/Web/API/AbortController)
### Drawbacks
- People who were using Node.js < 18 with Pyodide won't like it.
## Migration plan
- Mention minimal Node.js version support in docs
- Remove compat codes for old Node.js versions
- pytest-pyodide?
WDYT?
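For context, the kind of compat code this removes is version sniffing like the `v14` checks deleted in the test patch above; a minimal sketch of the corresponding gate (illustrative, not an existing API):

```python
import subprocess

out = subprocess.run(
    ["node", "-v"], capture_output=True, encoding="utf8"
).stdout.strip()  # e.g. "v18.16.0"
if int(out.lstrip("v").split(".")[0]) < 18:
    raise RuntimeError("Node.js >= 18 is required")
```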
 | +1 thanks for checking. If it's now deprecated, then I agree it makes sense to drop it. And also I think the type of users who use node are more able to change the version, as opposed to some browser end users who aren't.
We could set [MIN_NODE_VERSION](https://github.com/emscripten-core/emscripten/blob/0be5b609412d5b07f8157528df8e5088dae84858/src/settings.js#L1818) in Emscripten accordingly then. | 2023-10-30T12:03:53 |
pyodide/pyodide | 4,276 | pyodide__pyodide-4276 | [
"3649"
] | 8dc568ffa80c0140918ed2e05748da93b292ed19 | diff --git a/pyodide-build/pyodide_build/_f2c_fixes.py b/pyodide-build/pyodide_build/_f2c_fixes.py
--- a/pyodide-build/pyodide_build/_f2c_fixes.py
+++ b/pyodide-build/pyodide_build/_f2c_fixes.py
@@ -487,12 +487,28 @@ def scipy_fix_cfile(path: str) -> None:
for lib in ["lapack", "blas"]:
if path.endswith(f"cython_{lib}.c"):
- header_path = Path(path).with_name(f"_{lib}_subroutines.h")
+ header_name = f"_{lib}_subroutines.h"
+ header_dir = Path(path).parent
+ header_path = find_header(header_dir, header_name)
+
header_text = header_path.read_text()
header_text = header_text.replace("void F_FUNC", "int F_FUNC")
header_path.write_text(header_text)
+def find_header(source_dir: Path, header_name: str) -> Path:
+ """
+ Find the header file that corresponds to a source file.
+ """
+ while not (header_path := source_dir / header_name).exists():
+ # meson copies the source files into a subdirectory of the build
+ source_dir = source_dir.parent
+ if source_dir == Path("/"):
+ raise RuntimeError(f"Could not find header file {header_name}")
+
+ return header_path
+
+
def scipy_fixes(args: list[str]) -> None:
for arg in args:
if arg.endswith(".c"):
diff --git a/pyodide-build/pyodide_build/build_env.py b/pyodide-build/pyodide_build/build_env.py
--- a/pyodide-build/pyodide_build/build_env.py
+++ b/pyodide-build/pyodide_build/build_env.py
@@ -35,7 +35,6 @@
"HOSTSITEPACKAGES",
"NUMPY_LIB",
"PATH",
- "PKG_CONFIG_PATH",
"PLATFORM_TRIPLET",
"PIP_CONSTRAINT",
"PYMAJOR",
@@ -65,6 +64,7 @@
"CMAKE_TOOLCHAIN_FILE",
"PYO3_CONFIG_FILE",
"MESON_CROSS_FILE",
+ "PKG_CONFIG_LIBDIR",
}
diff --git a/pyodide-build/pyodide_build/pywasmcross.py b/pyodide-build/pyodide_build/pywasmcross.py
--- a/pyodide-build/pyodide_build/pywasmcross.py
+++ b/pyodide-build/pyodide_build/pywasmcross.py
@@ -62,7 +62,7 @@
import shutil
import subprocess
from collections.abc import Iterable, Iterator
-from typing import Literal, NoReturn
+from typing import Literal
@dataclasses.dataclass(eq=False, order=False, kw_only=True)
@@ -165,7 +165,6 @@ def replay_f2c(args: list[str], dryrun: bool = False) -> list[str] | None:
found_source = True
if not found_source:
- print(f"f2c: source not found, skipping: {new_args_str}")
return None
return new_args
@@ -328,8 +327,15 @@ def replay_genargs_handle_argument(arg: str) -> str | None:
'-fno-strict-overflow', # warning: argument unused during compilation
"-mno-sse2", # warning: argument unused during compilation
"-mno-avx2", # warning: argument unused during compilation
+ "-std=legacy", # fortran flag that clang does not support
]:
return None
+
+ if arg.startswith((
+ "-J", # fortran flag that clang does not support
+ )):
+ return None
+
# fmt: on
return arg
@@ -658,7 +664,7 @@ def handle_command_generate_args( # noqa: C901
def handle_command(
line: list[str],
build_args: BuildArgs,
-) -> NoReturn:
+) -> int:
"""Handle a compilation command. Exit with an appropriate exit code when done.
Parameters
@@ -672,11 +678,12 @@ def handle_command(
is_link_cmd = get_library_output(line) is not None
if line[0] == "gfortran":
- if "-dumpversion" in line:
- sys.exit(subprocess.run(line).returncode)
tmp = replay_f2c(line)
if tmp is None:
- sys.exit(0)
+ # No source file, it's a query for information about the compiler. Pretend we're
+ # gfortran by letting gfortran handle it
+ return subprocess.run(line).returncode
+
line = tmp
new_args = handle_command_generate_args(line, build_args, is_link_cmd)
@@ -686,9 +693,8 @@ def handle_command(
scipy_fixes(new_args)
- returncode = subprocess.run(new_args).returncode
-
- sys.exit(returncode)
+ result = subprocess.run(new_args)
+ return result.returncode
def compiler_main():
| diff --git a/pyodide-build/pyodide_build/tests/test_build_env.py b/pyodide-build/pyodide_build/tests/test_build_env.py
--- a/pyodide-build/pyodide_build/tests/test_build_env.py
+++ b/pyodide-build/pyodide_build/tests/test_build_env.py
@@ -60,8 +60,10 @@ def test_in_xbuildenv(self, reset_env_vars, reset_cache):
def test_get_build_environment_vars(self, reset_env_vars, reset_cache):
build_vars = build_env.get_build_environment_vars()
+
+ # extra variables that does not come from Makefile.envs but are added by build_env.py
extra_vars = set(
- ["PYODIDE", "PKG_CONFIG_PATH", "CMAKE_TOOLCHAIN_FILE", "PYO3_CONFIG_FILE"]
+ ["PYODIDE", "MESON_CROSS_FILE", "CMAKE_TOOLCHAIN_FILE", "PYO3_CONFIG_FILE"]
)
for var in build_vars:
@@ -98,18 +100,18 @@ def test_get_build_environment_vars_host_env(
monkeypatch.setenv("HOME", "/home/user")
monkeypatch.setenv("PATH", "/usr/bin:/bin")
- monkeypatch.setenv("PKG_CONFIG_PATH", "/x/y/z:/c/d/e")
+ monkeypatch.setenv("PKG_CONFIG_LIBDIR", "/x/y/z:/c/d/e")
build_env.get_build_environment_vars.cache_clear()
e_host = build_env.get_build_environment_vars()
assert e_host.get("HOME") == os.environ.get("HOME")
assert e_host.get("PATH") == os.environ.get("PATH")
- assert e_host["PKG_CONFIG_PATH"].endswith("/x/y/z:/c/d/e")
+ assert e_host["PKG_CONFIG_LIBDIR"].endswith("/x/y/z:/c/d/e")
assert e_host.get("HOME") != e.get("HOME")
assert e_host.get("PATH") != e.get("PATH")
- assert e_host.get("PKG_CONFIG_PATH") != e.get("PKG_CONFIG_PATH")
+ assert e_host.get("PKG_CONFIG_LIBDIR") != e.get("PKG_CONFIG_LIBDIR")
build_env.get_build_environment_vars.cache_clear()
| WIP: build scipy with meson
Tries to continue @hoodmane's work in https://github.com/pyodide/pyodide/pull/3037
| So there are a few issues that happen here,
1.
```
Dependency lookup for OpenBLAS with method 'pkgconfig' failed: Pkg-config binary for machine 1 not
found. Giving up.
```
as discussed in the parent issue. pkg-config is installed and setting `export PKG_CONFIG_LIBDIR=$WASM_PKG_CONFIG_PATH` should fix the path, so I don't know why it cannot find it.
2. `WARNING: Unknown CPU family wasm, please report this at https://github.com/mesonbuild/meson/issues/new`
Maybe a false positive, the config looks correct.
3. ` ERROR: Compiler /usr/bin/gfortran can not compile programs.` depending on what one puts in the config of `tools/emscripten.meson.cross`.
Initially, I felt the quickest might be to monkeypatch the [compiler checks in meson](https://github.com/mesonbuild/meson/blob/master/mesonbuild/compilers/fortran.py) but since it's run in a separate process by `meson-python` we can't easily modify that code at runtime, short of installing a modified version of meson.
So we would indeed likely need to make current checks pass with our compiler wrappers, as Hood was suggesting.
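That is roughly what the `pywasmcross.py` change at the top of this entry does: when `replay_f2c` finds no Fortran source (i.e. meson is just probing the compiler), the wrapper forwards the command to the real gfortran so the sanity checks pass — condensed:

```python
import subprocess

from pyodide_build.pywasmcross import replay_f2c


def handle_fortran(line: list[str]) -> int:
    new_args = replay_f2c(line)  # None when there is no .f source file
    if new_args is None:
        # A version/feature probe: answer as gfortran would.
        return subprocess.run(line).returncode
    return subprocess.run(new_args).returncode
```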
I'll likely not continue this PR in the near future. This was mostly so we have a separate PR on scipy with meson, independent from scipy. Feel free to take it over.
BTW, it looks like the last version of scikit-image also migrated to meson. So maybe that would be an easier package to start, since it only has Cython with a little C.
There are too many build systems in the world.
Indeed :) But with distutils deprecations, many Python scientific packages (or packages with C extensions) will move to meson. So that's one we likely can't avoid. | 2023-11-01T12:28:56 |
pyodide/pyodide | 4,319 | pyodide__pyodide-4319 | [
"4182"
] | 329f9943d2eb254b64bb431603e996a3451081ad | diff --git a/pyodide-build/pyodide_build/buildall.py b/pyodide-build/pyodide_build/buildall.py
--- a/pyodide-build/pyodide_build/buildall.py
+++ b/pyodide-build/pyodide_build/buildall.py
@@ -19,6 +19,7 @@
from time import perf_counter, sleep
from typing import Any
+from packaging.utils import canonicalize_name
from pyodide_lock import PyodideLockSpec
from pyodide_lock.spec import PackageSpec as PackageLockSpec
from pyodide_lock.utils import update_package_sha256
@@ -637,6 +638,8 @@ def generate_packagedata(
) -> dict[str, PackageLockSpec]:
packages: dict[str, PackageLockSpec] = {}
for name, pkg in pkg_map.items():
+ normalized_name = canonicalize_name(name)
+
if not pkg.file_name or pkg.package_type == "static_library":
continue
if not Path(output_dir, pkg.file_name).exists():
@@ -666,10 +669,10 @@ def generate_packagedata(
pkg.meta.package.top_level if pkg.meta.package.top_level else [name]
)
- packages[name.lower()] = pkg_entry
+ packages[normalized_name.lower()] = pkg_entry
if pkg.unvendored_tests:
- packages[name.lower()].unvendored_tests = True
+ packages[normalized_name.lower()].unvendored_tests = True
# Create the test package if necessary
pkg_entry = PackageLockSpec(
@@ -682,7 +685,7 @@ def generate_packagedata(
update_package_sha256(pkg_entry, output_dir / pkg.unvendored_tests.name)
- packages[name.lower() + "-tests"] = pkg_entry
+ packages[normalized_name.lower() + "-tests"] = pkg_entry
# sort packages by name
packages = dict(sorted(packages.items()))
| diff --git a/packages/micropip/test_micropip.py b/packages/micropip/test_micropip.py
--- a/packages/micropip/test_micropip.py
+++ b/packages/micropip/test_micropip.py
@@ -5,6 +5,8 @@
import pytest
from pytest_pyodide import run_in_pyodide, spawn_web_server
+from conftest import package_is_built
+
cpver = f"cp{sys.version_info.major}{sys.version_info.minor}"
@@ -243,3 +245,18 @@ async def run_test(selenium, url, wheel_name):
]
run_test(selenium_standalone_micropip, url, SNOWBALL_WHEEL)
+
+
+def test_install_non_normalized_package(selenium_standalone_micropip):
+ if not package_is_built("ruamel-yaml"):
+ pytest.skip("ruamel.yaml not built")
+
+ selenium = selenium_standalone_micropip
+
+ selenium.run_async(
+ """
+ import micropip
+ await micropip.install("ruamel.yaml")
+ import ruamel.yaml
+ """
+ )
diff --git a/pyodide-build/pyodide_build/tests/test_buildall.py b/pyodide-build/pyodide_build/tests/test_buildall.py
--- a/pyodide-build/pyodide_build/tests/test_buildall.py
+++ b/pyodide-build/pyodide_build/tests/test_buildall.py
@@ -71,14 +71,14 @@ def test_generate_lockfile(tmp_path):
assert package_data.info.platform.startswith("emscripten")
assert set(package_data.packages) == {
- "pkg_1",
- "pkg_1_1",
- "pkg_2",
- "pkg_3",
- "pkg_3_1",
- "libtest_shared",
+ "pkg-1",
+ "pkg-1-1",
+ "pkg-2",
+ "pkg-3",
+ "pkg-3-1",
+ "libtest-shared",
}
- assert package_data.packages["pkg_1"] == PackageSpec(
+ assert package_data.packages["pkg-1"] == PackageSpec(
name="pkg_1",
version="1.0.0",
file_name="pkg_1-1.0.0-py3-none-any.whl",
@@ -89,9 +89,9 @@ def test_generate_lockfile(tmp_path):
sha256=hashes["pkg_1"],
)
- assert package_data.packages["libtest_shared"].package_type == "shared_library"
+ assert package_data.packages["libtest-shared"].package_type == "shared_library"
- sharedlib_imports = package_data.packages["libtest_shared"].imports
+ sharedlib_imports = package_data.packages["libtest-shared"].imports
assert not sharedlib_imports, (
"shared libraries should not have any imports, but got " f"{sharedlib_imports}"
)
diff --git a/src/js/test/unit/common.test.ts b/src/js/test/unit/common.test.ts
new file mode 100644
--- /dev/null
+++ b/src/js/test/unit/common.test.ts
@@ -0,0 +1,71 @@
+import * as chai from "chai";
+import {
+ canonicalizePackageName,
+ uriToPackageData,
+} from "../../packaging-utils";
+
+describe("canonicalizePackageName", () => {
+ it("should return lower case", () => {
+ chai.assert.equal(canonicalizePackageName("ABC"), "abc");
+ });
+ it("should replace -, _, . with -", () => {
+ chai.assert.equal(
+ canonicalizePackageName("pytest-pyodide"),
+ "pytest-pyodide",
+ );
+ chai.assert.equal(canonicalizePackageName("ruamel.yaml"), "ruamel-yaml");
+ chai.assert.equal(
+ canonicalizePackageName("pytest_benchmark"),
+ "pytest-benchmark",
+ );
+ chai.assert.equal(canonicalizePackageName("a_b-c.d"), "a-b-c-d");
+ });
+});
+
+describe("uriToPackageData", () => {
+ it("should return the correct package data if a correct wheel URI is given", () => {
+ const testcases: [
+ string,
+ { name: string; version: string; fileName: string },
+ ][] = [
+ [
+ "https://files.pythonhosted.org/packages/70/8e/0e2d847013cb52cd35b38c009bb167a1a26b2ce6cd6965bf26b47bc0bf44/requests-2.31.0-py3-none-any.whl",
+ {
+ name: "requests",
+ version: "2.31.0",
+ fileName: "requests-2.31.0-py3-none-any.whl",
+ },
+ ],
+ [
+ "https://example.com/srtrain-2.3.0-py3-none-any.whl",
+ {
+ name: "srtrain",
+ version: "2.3.0",
+ fileName: "srtrain-2.3.0-py3-none-any.whl",
+ },
+ ],
+ [
+ "https://test.net/numpy-1.25.2-cp311-cp311-emscripten_3_1_45_wasm32.whl",
+ {
+ name: "numpy",
+ version: "1.25.2",
+ fileName: "numpy-1.25.2-cp311-cp311-emscripten_3_1_45_wasm32.whl",
+ },
+ ],
+ ];
+
+ testcases.forEach((tc) => {
+ const [url, expected] = tc;
+ const pkgData = uriToPackageData(url);
+ chai.assert.equal(pkgData?.name, expected.name);
+ chai.assert.equal(pkgData?.version, expected.version);
+ chai.assert.equal(pkgData?.fileName, expected.fileName);
+ });
+ });
+
+ it("should return undefined if URI is not a valid wheel URI", () => {
+ chai.assert.equal(uriToPackageData("requests"), undefined);
+ chai.assert.equal(uriToPackageData("pyodide-lock"), undefined);
+ chai.assert.equal(uriToPackageData("pytest_benchmark"), undefined);
+ });
+});
diff --git a/src/tests/test_package_loading.py b/src/tests/test_package_loading.py
--- a/src/tests/test_package_loading.py
+++ b/src/tests/test_package_loading.py
@@ -588,3 +588,40 @@ def test_custom_lockfile(selenium_standalone_noload):
)
finally:
custom_lockfile.unlink()
+
+
[email protected](
+ "load_name, normalized_name, real_name",
+ [
+ # TODO: find a better way to test this without relying on the core packages set
+ ("fpcast-test", "fpcast-test", "fpcast-test"),
+ ("fpcast_test", "fpcast-test", "fpcast-test"),
+ ("Jinja2", "jinja2", "Jinja2"),
+ ("jinja2", "jinja2", "Jinja2"),
+ ("pydoc_data", "pydoc-data", "pydoc_data"),
+ ("pydoc-data", "pydoc-data", "pydoc_data"),
+ ],
+)
+def test_normalized_name(selenium_standalone, load_name, normalized_name, real_name):
+ selenium = selenium_standalone
+
+ selenium.run_js(
+ f"""
+ const msgs = [];
+ await pyodide.loadPackage(
+ "{load_name}",
+ {{
+ messageCallback: (msg) => msgs.push(msg),
+ }}
+ )
+
+ const loaded = Object.keys(pyodide.loadedPackages);
+ assert(() => loaded.includes("{real_name}"));
+
+ const loadStartMsgs = msgs.filter((msg) => msg.startsWith("Loading"));
+ const loadEndMsgs = msgs.filter((msg) => msg.startsWith("Loaded"));
+
+ assert(() => loadStartMsgs.some((msg) => msg.includes("{real_name}")));
+ assert(() => loadEndMsgs.some((msg) => msg.includes("{real_name}")));
+ """
+ )
| API Normalize package names in pyodide-lock.json
In https://github.com/pyodide/pyodide-lock/pull/20#issuecomment-1735331697 we discussed that it would be good to normalize the package names. This was also supported by @bollwyvl's comment in https://github.com/pyodide/pyodide-lock/pull/18#discussion_r1336556106, and it is also what [PEP 691](https://peps.python.org/pep-0691/) for the Simple JSON API recommends.
In the case of pyodide, I imagine that would mean doing,
```py
packages: {
<normalized_name>: {
name : <normalized_name>
depends: [<normalized_names>]
}
}
```
so we would be repeating the normalized name twice: once as the dict key, and a second time as the inner `name` field.
This, however, means that if we run `micropip.install('scikit-learn')` (or if it is pulled in as a dependency), the logs would show `installing scikit_learn`, which I personally find strange, unless we de-normalize the name somehow when displaying it (at least replacing `_` with `-`).
For the input `meta.yaml` this shouldn't change anything; we would only normalize the output name.
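For reference, this normalization is exactly what `packaging.utils.canonicalize_name` implements, so no new code should be needed for the rule itself (a quick illustration, not part of the proposed diff):
```py
from packaging.utils import canonicalize_name

# PEP 503: lowercase, with runs of ".", "-" and "_" collapsed to a single "-"
canonicalize_name("ruamel.yaml")       # "ruamel-yaml"
canonicalize_name("Pytest_Benchmark")  # "pytest-benchmark"
canonicalize_name("scikit-learn")      # "scikit-learn" (already normalized)
```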
cc @ryanking13 @joemarshall
| ```
packages: {
<normalized_name>: {
name : <original_name>
depends: [<normalized_names>]
}
}
```
Actually, I was thinking of something like this, so that a package can be queried by its normalized name while we can still get the original name of the package from the value.
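A minimal sketch of that lookup pattern (the field names just mirror the snippet above and are illustrative):
```py
from packaging.utils import canonicalize_name

packages = {
    "ruamel-yaml": {"name": "ruamel.yaml", "depends": []},
}

# query by normalized name, but keep the verbatim name for display/logs
entry = packages[canonicalize_name("ruamel.yaml")]
print(f"installing {entry['name']}")  # installing ruamel.yaml
```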
I would be happy with this; I was thinking along the same lines. Would the above work for you, @bollwyvl? That would also mean we can still remove the normalization discussed in https://github.com/pyodide/pyodide-lock/pull/18#discussion_r1332985361.
Sure, keeping the verbatim `name` sounds good, along with some human-readable text to this effect that makes it all the way into the schema.
It turns out _dict keyed by strings matching {pattern}_, and even _list of strings matching {pattern}_, are... non-trivial to support for _both_ major versions of `pydantic`. As `pyodide` still ships `pydantic 1.10.7`, and going to `2.x` would take a bunch more Rust work, it likely makes sense to constrain `pydantic <2` for now. | 2023-12-02T12:01:04 |
pyodide/pyodide | 4,404 | pyodide__pyodide-4404 | [
"4403"
] | e586e7c0f267759ce47cab11dc283b2655953946 | diff --git a/pyodide-build/pyodide_build/buildpkg.py b/pyodide-build/pyodide_build/buildpkg.py
--- a/pyodide-build/pyodide_build/buildpkg.py
+++ b/pyodide-build/pyodide_build/buildpkg.py
@@ -860,9 +860,12 @@ def build_package(
continue_=continue_,
)
- except Exception:
+ except (Exception, KeyboardInterrupt):
success = False
raise
+ except SystemExit as e:
+ success = e.code == 0
+ raise
finally:
t1 = datetime.now()
datestamp = "[{}]".format(t1.strftime("%Y-%m-%d %H:%M:%S"))
| "ERROR: build script failed" but "Succeeded building package"
## 🐛 Bug
<!-- A clear and concise description of what the bug is. -->
`pyodide build-recipes libgmp --install` fails (but that's a separate issue).
It correctly shows "ERROR: build script failed".
But then it proceeds with "Succeeded building package libgmp".
```
mpn/sec_pi1_div_r.o mpn/sec_add_1.o mpn/sec_sub_1.o mpn/sec_invert.o mpn/trialdiv.o .libs/libgmp.lax/lt105-remove.o mpn/and_n.o
mpn/andn_n.o mpn/nand_n.o mpn/ior_n.o mpn/iorn_n.o mpn/nior_n.o mpn/xor_n.o mpn/xnor_n.o mpn/copyi.o mpn/copyd.o mpn/zero.o
mpn/sec_tabselect.o mpn/comb_tables.o mpn/add_n_sub_n.o printf/asprintf.o printf/asprntffuns.o printf/doprnt.o printf/doprntf.o
printf/doprnti.o printf/fprintf.o printf/obprintf.o printf/obvprintf.o printf/obprntffuns.o printf/printf.o printf/printffuns.o
printf/snprintf.o printf/snprntffuns.o printf/sprintf.o printf/sprintffuns.o printf/vasprintf.o printf/vfprintf.o
printf/vprintf.o printf/vsnprintf.o printf/vsprintf.o printf/repl-vsnprintf.o scanf/doscan.o scanf/fscanf.o scanf/fscanffuns.o
scanf/scanf.o scanf/sscanf.o scanf/sscanffuns.o scanf/vfscanf.o scanf/vscanf.o scanf/vsscanf.o rand/rand.o rand/randclr.o
rand/randdef.o rand/randiset.o rand/randlc2s.o rand/randlc2x.o rand/randmt.o rand/randmts.o rand/rands.o rand/randsd.o
rand/randsdui.o rand/randbui.o rand/randmui.o
/workspaces/pyodide/emsdk/emsdk/upstream/bin/llvm-ar: error: .libs/libgmp.lax/lt1-abs.o: No such file or directory
make[2]: *** [Makefile:882: libgmp.la] Error 1
make[2]: Leaving directory '/workspaces/pyodide/packages/libgmp/build/libgmp-6.2.1'
make[1]: *** [Makefile:997: all-recursive] Error 1
make[1]: Leaving directory '/workspaces/pyodide/packages/libgmp/build/libgmp-6.2.1'
make: *** [Makefile:787: all] Error 2
emmake: error: 'make -j 1' failed (returned 2)
ERROR: build script failed
emconfigure ./configure \
CFLAGS="-fPIC" \
--disable-dependency-tracking \
--host none \
--disable-shared \
--enable-static \
--prefix=${WASM_LIBRARY_DIR}
emmake make -j ${PYODIDE_JOBS:-3}
emmake make install
[2024-01-21 19:00:26] Succeeded building package libgmp in 251.0 seconds.
```
### To Reproduce
<!-- Minimal code example to reproduce the bug. -->
In the Conda devcontainer configured in #4402, type
`pyodide build-recipes libgmp --install`.
### Expected behavior
<!-- FILL IN -->
Either build without error, or report an error.
### Environment
- Pyodide Version<!-- (e.g. 1.8.1) -->: 0.26.0.dev0
- Browser version<!-- (e.g. Chrome 95.0.4638.54) -->: N/A
<!-- If you are building Pyodide yourself, please also include this information: -->
- Commit hash of Pyodide git repository: 09f5f5493ef0e0db142d11d6a9df8accb645d59f
- Build environment: Conda devcontainer configured in #4402, running on macOS
### Additional context
<!-- Add any other context about the problem here. -->
| Thanks for the report. I think this bug has been present for the full three years I've been a maintainer of Pyodide and nobody has ever gotten around to fixing it... It is a bit confusing though.
Any hint where to start to look to fix it?
Looks to me that the problem is here:
https://github.com/pyodide/pyodide/blob/main/pyodide-build/pyodide_build/buildpkg.py?plain=1#L863-L873
```py
success = True
try:
_build_package_inner(...)
except Exception:
success = False
raise
finally:
status = "Succeeded" if success else "Failed"
```
`success` is left as `True` because the generic `except Exception` handler is never executed. The ERROR message is printed here:
https://github.com/pyodide/pyodide/blob/main/pyodide-build/pyodide_build/buildpkg.py?plain=1#L124-L126
```py
logger.error(f"ERROR: {script_name} failed")
logger.error(textwrap.indent(cmd, " "))
exit_with_stdio(result)
```
And `exit_with_stdio` raises `SystemExit`, which is not caught by `except Exception`. So we should add handling for `KeyboardInterrupt` and `SystemExit` on line 863. I think the following patch would probably be sufficient:
```patch
--- a/pyodide-build/pyodide_build/buildpkg.py
+++ b/pyodide-build/pyodide_build/buildpkg.py
@@ -860,9 +860,11 @@ def build_package(
continue_=continue_,
)
- except Exception:
+ except (Exception, KeyboardInterrupt):
success = False
raise
+ except SystemExit as e:
+ success = e.code == 0
finally:
t1 = datetime.now()
datestamp = "[{}]".format(t1.strftime("%Y-%m-%d %H:%M:%S"))
```
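For anyone unfamiliar with the exception hierarchy, here is a standalone illustration of the behavior described above (assuming nothing beyond stock Python): `SystemExit` derives from `BaseException`, not `Exception`, so it sails straight past a bare `except Exception`:
```py
success = True
try:
    raise SystemExit(1)  # roughly what exit_with_stdio does on failure
except Exception:
    success = False  # never runs: SystemExit is not an Exception subclass
finally:
    # prints "Succeeded", then the SystemExit keeps propagating
    print("Succeeded" if success else "Failed")
```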
Though we might consider adding a distinct message for `KeyboardInterrupt` and I'm not sure that anyone ever raises `SystemExit(0)` so it's not clear how important that case is.
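A distinct message could be as simple as splitting the handler, e.g. (a hypothetical sketch extending the snippet quoted above; `logger` is the module logger `buildpkg.py` already uses):
```py
try:
    _build_package_inner(...)  # as above
except KeyboardInterrupt:
    success = False
    logger.error("Build interrupted by user")
    raise
except Exception:
    success = False
    raise
```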
I think it needs to re-raise in the `SystemExit` case too | 2024-01-22T03:02:28 |