{
"cells": [
{
"cell_type": "markdown",
"metadata": {},
"source": [
"__importnb__ imports notebooks as modules. Notebooks are reusable as tests, source code, importable modules, and command line utilities.\n",
"\n",
"[](https://mybinder.org/v2/gh/deathbeds/importnb/master?urlpath=lab/tree/readme.ipynb)[](https://importnb.readthedocs.io/en/latest/?badge=latest)\n",
"[](https://travis-ci.org/deathbeds/importnb)[](https://badge.fury.io/py/importnb)[\n",
"](https://anaconda.org/conda-forge/importnb)[\n",
"](https://github.com/deathbeds/importnb/tree/master/src/importnb) [](https://github.com/psf/black)\n",
"\n",
"\n",
"##### Installation\n",
"\n",
" pip install importnb\n",
" \n",
"---\n",
"\n",
" conda install -c conda-forge importnb"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"---\n",
"\n",
"# `importnb` for testing\n",
"\n",
"After `importnb` is installed, [pytest](https://pytest.readthedocs.io/) will discover and import notebooks as tests.\n",
"\n",
" pytest index.ipynb\n",
"\n",
"[`importnb`](https://github.com/deathbeds/importnb) imports notebooks as python modules, it does not compare outputs like [`nbval`](https://github.com/computationalmodelling/nbval). \n",
"\n",
"[`importnb`](https://github.com/deathbeds/importnb) now captures `doctest`s in every __Markdown__ cell & block string expression. The docstrings are tested with the [__--doctest-modules__ flag](https://doc.pytest.org/en/latest/doctest.html).\n",
"\n",
" pytest index.ipynb --doctest-modules\n",
" \n",
"It is recommended to use `importnb` with [__--nbval__](https://github.com/computationalmodelling/nbval) and the __--monotonic__ flag that checks if has notebook has be restarted and re-run.\n",
"\n",
" pytest index.ipynb --nbval --monotonic"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"---\n",
"\n",
"# `importnb` for the commmand line\n",
"\n",
"`importnb` can run notebooks as command line scripts. Any literal variable in the notebook, may be applied as a parameter from the command line.\n",
"\n",
" ipython -m importnb -- index.ipynb --foo \"A new value\"\n",
" "
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"---\n",
"\n",
"# `importnb` for Python and IPython\n",
"\n",
"\n",
"It is suggested to execute `importnb-install` to make sure that notebooks for each IPython session.\n",
"\n",
"> Restart and run all or it didn't happen.\n",
"\n",
"`importnb` excels in an interactive environment and if a notebook will __Restart and Run All__ then it may reused as python code. The `Notebook` context manager will allow notebooks _with valid names_ to import with Python.\n",
"\n",
" >>> from importnb import Notebook"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"### For brevity"
]
},
{
"cell_type": "code",
"execution_count": 1,
"metadata": {},
"outputs": [],
"source": [
" with __import__('importnb').Notebook(): \n",
" import readme"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"> [`importnb.loader`](src/notebooks/loader.ipynb) will find notebooks available anywhere along the [`sys.path`](https://docs.python.org/2/library/sys.html#sys.path)."
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"#### or explicity "
]
},
{
"cell_type": "code",
"execution_count": 2,
"metadata": {},
"outputs": [],
"source": [
" from importnb import Notebook\n",
" with Notebook(): \n",
" import readme"
]
},
{
"cell_type": "code",
"execution_count": 3,
"metadata": {},
"outputs": [],
"source": [
" foo = 42\n",
" with Notebook(): \n",
" import readme\n",
" if __name__ == '__main__':\n",
" assert readme.foo == 42\n",
" assert readme.__file__.endswith('.ipynb')"
]
},
{
"cell_type": "markdown",
"metadata": {
"nbsphinx-toctree": {}
},
"source": [
"[`importnb` readme](readme.ipynb)"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"### Modules may be reloaded \n",
"\n",
"The context manager is required to `reload` a module."
]
},
{
"cell_type": "code",
"execution_count": 4,
"metadata": {},
"outputs": [],
"source": [
" from importlib import reload\n",
" with Notebook(): __name__ == '__main__' and reload(readme)"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"### Lazy imports\n",
"\n",
"The `lazy` option will delay the evaluation of a module until one of its attributes are accessed the first time."
]
},
{
"cell_type": "code",
"execution_count": 5,
"metadata": {},
"outputs": [],
"source": [
" with Notebook(lazy=True):\n",
" import readme"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"### Fuzzy File Names"
]
},
{
"cell_type": "code",
"execution_count": 6,
"metadata": {},
"outputs": [],
"source": [
" if __name__ == '__main__':\n",
" with Notebook():\n",
" import __a_me\n",
" \n",
" assert __a_me.__file__ == readme.__file__"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"Python does not provide a way to import file names starting with numbers of contains special characters. `importnb` installs a fuzzy import logic to import files containing these edge cases.\n",
"\n",
" import __2018__6_01_A_Blog_Post\n",
" \n",
"will find the first file matching `*2018*6?01?A?Blog?Post`. Importing `Untitled314519.ipynb` could be supported with the query below.\n",
"\n",
" import __314519"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"### Docstring\n",
"\n",
"The first markdown cell will become the module docstring."
]
},
{
"cell_type": "code",
"execution_count": 7,
"metadata": {},
"outputs": [
{
"name": "stdout",
"output_type": "stream",
"text": [
"__importnb__ imports notebooks as modules. Notebooks are reusable as tests, source code, importable modules, and command line utilities.\n"
]
}
],
"source": [
" if __name__ == '__main__':\n",
" print(readme.__doc__.splitlines()[0])"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"Meaning non-code blocks can be executeb by [doctest]()."
]
},
{
"cell_type": "code",
"execution_count": 8,
"metadata": {},
"outputs": [],
"source": [
" if __name__ == '__main__':\n",
" __import__('doctest').testmod(readme)"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"# Import notebooks from files\n",
"\n",
"Notebook names may not be valid Python paths. In this case, use `Notebook.from_filename`.\n",
"\n",
" Notebook.from_filename('index.ipynb')\n",
" \n",
"Import under the `__main__` context.\n",
" \n",
" Notebook('__main__').from_filename('index.ipynb')"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"# Parameterize Notebooks\n",
"\n",
"Literal ast statements are converted to notebooks parameters.\n",
"\n",
"In `readme`, `foo` is a parameter because it may be evaluated with ast.literal_val"
]
},
{
"cell_type": "code",
"execution_count": 9,
"metadata": {},
"outputs": [],
"source": [
" if __name__ == '__main__':\n",
" from importnb.parameterize import Parameterize\n",
" f = Parameterize.load(readme.__file__)"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"The parameterized module is a callable that evaluates with different literal statements."
]
},
{
"cell_type": "code",
"execution_count": 10,
"metadata": {},
"outputs": [],
"source": [
" if __name__ == '__main__': \n",
" assert callable(f)\n",
" f.__signature__\n",
"\n",
" assert f().foo == 42\n",
" assert f(foo='importnb').foo == 'importnb'"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"# Run Notebooks from the command line\n",
"\n",
"Run any notebook from the command line with importnb. Any parameterized expressions are available as parameters on the command line.\n",
"\n",
" "
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
" !ipython -m importnb -- index.ipynb --foo \"The new value\""
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## Integrations\n"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"### IPython\n",
"\n",
"#### [IPython Extension](src/notebooks/loader.ipynb#IPython-Extensions)\n",
"\n",
"Avoid the use of the context manager using loading importnb as IPython extension.\n",
"\n",
" %load_ext importnb\n",
" \n",
"`%unload_ext importnb` will unload the extension.\n",
"\n",
"#### Default Extension\n",
"\n",
"`importnb` may allow notebooks to import by default with "
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
" !importnb-install\n",
" "
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"> If you'd like to play with source code on binder then you must execute the command above. Toggle the markdown cell to a code cell and run it."
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"This extension will install a script into the default IPython profile startup that is called each time an IPython session is created. \n",
"\n",
"Uninstall the extension with `importnb-install`.\n",
"\n",
"##### Run a notebook as a module\n",
"\n",
"When the default extension is loaded any notebook can be run from the command line. After the `importnb` extension is created notebooks can be execute from the command line.\n",
"\n",
" ipython -m readme\n",
" \n",
"In the command line context, `__file__ == sys.arv[0] and __name__ == '__main__'` .\n",
" \n",
"> See the [deploy step in the travis build](https://github.com/deathbeds/importnb/blob/docs/.travis.yml#L19).\n",
"\n",
"##### Parameterizable IPython commands\n",
"\n",
"Installing the IPython extension allows notebooks to be computed from the command. The notebooks are parameterizable from the command line.\n",
"\n",
" ipython -m readme -- --help"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"### py.test\n",
"\n",
"`importnb` installs a pytest plugin when it is setup. Any notebook obeying the py.test discovery conventions can be used in to pytest. _This is great because notebooks are generally your first test._\n",
"\n",
" !ipython -m pytest -- src \n",
" \n",
"Will find all the test notebooks and configurations as pytest would any Python file."
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"### Setup\n",
"\n",
"To package notebooks add `recursive-include package_name *.ipynb`"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## Developer"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"* [Source Notebooks](src/notebooks/)\n",
"* [Transpiled Python Source](src/importnb/)\n",
"* [Tests](src/importnb/tests)"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"### Format and test the Source Code"
]
},
{
"cell_type": "code",
"execution_count": 11,
"metadata": {},
"outputs": [
{
"name": "stdout",
"output_type": "stream",
"text": [
"/Users/tonyfast/anaconda3/lib/python3.7/site-packages/IPython/core/inputsplitter.py:22: DeprecationWarning: IPython.core.inputsplitter is deprecated since IPython 7 in favor of `IPython.core.inputtransformer2`\n",
" DeprecationWarning)\n",
"\u001b]0;IPython: tonyfast/importnb\u0007\u001b[1m============================= test session starts ==============================\u001b[0m\n",
"platform darwin -- Python 3.7.3, pytest-5.1.2, py-1.8.0, pluggy-0.13.0 -- /Users/tonyfast/anaconda3/bin/python\n",
"cachedir: .pytest_cache\n",
"hypothesis profile 'default' -> database=DirectoryBasedExampleDatabase('/Users/tonyfast/importnb/.hypothesis/examples')\n",
"rootdir: /Users/tonyfast/importnb, inifile: tox.ini\n",
"plugins: hypothesis-4.36.2, nbval-0.9.2, black-0.3.7, pylint-0.14.1, xonsh-0.9.11, importnb-0.5.5\n",
"collected 19 items \u001b[0m\n",
"\n",
"src/importnb/completer.py::importnb.completer \u001b[32mPASSED\u001b[0m\u001b[36m [ 5%]\u001b[0m\n",
"src/importnb/loader.py::importnb.loader.FinderContextManager \u001b[32mPASSED\u001b[0m\u001b[36m [ 10%]\u001b[0m\n",
"src/importnb/utils/export.py::importnb.utils.export \u001b[32mPASSED\u001b[0m\u001b[36m [ 15%]\u001b[0m\n",
"tests/test_importnb.ipynb::test_basic \u001b[32mPASSED\u001b[0m\u001b[36m [ 21%]\u001b[0m\n",
"tests/test_importnb.ipynb::test_package \u001b[32mPASSED\u001b[0m\u001b[36m [ 26%]\u001b[0m\n",
"tests/test_importnb.ipynb::test_reload \u001b[32mPASSED\u001b[0m\u001b[36m [ 31%]\u001b[0m\n",
"tests/test_importnb.ipynb::test_docstrings \u001b[32mPASSED\u001b[0m\u001b[36m [ 36%]\u001b[0m\n",
"tests/test_importnb.ipynb::test_docstring_opts \u001b[32mPASSED\u001b[0m\u001b[36m [ 42%]\u001b[0m\n",
"tests/test_importnb.ipynb::test_from_file \u001b[32mPASSED\u001b[0m\u001b[36m [ 47%]\u001b[0m\n",
"tests/test_importnb.ipynb::test_lazy \u001b[32mPASSED\u001b[0m\u001b[36m [ 52%]\u001b[0m\n",
"tests/test_importnb.ipynb::test_module_source \u001b[32mPASSED\u001b[0m\u001b[36m [ 57%]\u001b[0m\n",
"tests/test_importnb.ipynb::test_main \u001b[32mPASSED\u001b[0m\u001b[36m [ 63%]\u001b[0m\n",
"tests/test_importnb.ipynb::test_object_source \u001b[32mPASSED\u001b[0m\u001b[36m [ 68%]\u001b[0m\n",
"tests/test_importnb.ipynb::test_python_file \u001b[32mPASSED\u001b[0m\u001b[36m [ 73%]\u001b[0m\n",
"tests/test_importnb.ipynb::test_cli \u001b[32mPASSED\u001b[0m\u001b[36m [ 78%]\u001b[0m\n",
"tests/test_importnb.ipynb::test_parameterize \u001b[32mPASSED\u001b[0m\u001b[36m [ 84%]\u001b[0m\n",
"tests/test_importnb.ipynb::test_minified_json \u001b[32mPASSED\u001b[0m\u001b[36m [ 89%]\u001b[0m\n",
"tests/test_importnb.ipynb::test_fuzzy_finder \u001b[32mPASSED\u001b[0m\u001b[36m [ 94%]\u001b[0m\n",
"tests/test_importnb.ipynb::test_remote \u001b[32mPASSED\u001b[0m\u001b[36m [100%]\u001b[0m\n",
"\n",
"\u001b[33m=============================== warnings summary ===============================\u001b[0m\n",
"src/importnb/completer.py::importnb.completer\n",
"src/importnb/completer.py::importnb.completer\n",
"src/importnb/completer.py::importnb.completer\n",
"src/importnb/completer.py::importnb.completer\n",
" /Users/tonyfast/anaconda3/lib/python3.7/site-packages/IPython/core/completer.py:1950: PendingDeprecationWarning: `Completer.complete` is pending deprecation since IPython 6.0 and will be replaced by `Completer.completions`.\n",
" PendingDeprecationWarning)\n",
"\n",
"-- Docs: https://docs.pytest.org/en/latest/warnings.html\n",
"\u001b[33m\u001b[1m======================== 19 passed, 4 warnings in 2.93s ========================\u001b[0m\n",
"[NbConvertApp] Converting notebook index.ipynb to markdown\n"
]
}
],
"source": [
" if __name__ == '__main__':\n",
" if globals().get('__file__', None) == __import__('sys').argv[0]:\n",
" print(foo, __import__('sys').argv)\n",
" else:\n",
" from subprocess import call\n",
" !ipython -m pytest\n",
" \"\"\"Formatting\"\"\"\n",
" from pathlib import Path\n",
" from importnb.utils.export import export\n",
" root = 'src/importnb/notebooks/'\n",
" for path in Path(root).rglob(\"\"\"*.ipynb\"\"\"): \n",
" if 'checkpoint' not in str(path):\n",
" export(path, Path('src/importnb') / path.with_suffix('.py').relative_to(root))\n",
" !jupyter nbconvert --to markdown --stdout index.ipynb > readme.md\n",
" "
]
},
{
"cell_type": "code",
"execution_count": 12,
"metadata": {},
"outputs": [
{
"data": {
"text/html": [
"
"
],
"text/plain": [
""
]
},
"metadata": {},
"output_type": "display_data"
}
],
"source": [
" if __name__ == '__main__':\n",
" try:\n",
" from IPython.display import display, Image\n",
" from IPython.utils.capture import capture_output\n",
" from IPython import get_ipython\n",
" with capture_output(): \n",
" get_ipython().system(\"cd docs && pyreverse importnb -opng -pimportnb\")\n",
" display(Image(url='docs/classes_importnb.png', ))\n",
" except: ..."
]
}
],
"metadata": {
"kernelspec": {
"display_name": "Python [conda env:root] *",
"language": "python",
"name": "conda-root-py"
},
"language_info": {
"codemirror_mode": {
"name": "ipython",
"version": 3
},
"file_extension": ".py",
"mimetype": "text/x-python",
"name": "python",
"nbconvert_exporter": "python",
"pygments_lexer": "ipython3",
"version": "3.7.3"
}
},
"nbformat": 4,
"nbformat_minor": 4
}