## README.md

Documentation is written using [mkdocs](https://www.mkdocs.org/) and themed with …
Each tool gets a `nav` section in `mkdocs.yml`, which maps to its own section/tab in the rendered documentation. So to add a new page, change titles, or change structure, edit `mkdocs.yml`. To edit the documentation itself, edit the `.md` documentation files in the subfolders under `/docs`.
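For instance, a tool's entry in `mkdocs.yml` might look like this (the tool name and paths here are illustrative, not necessarily the repo's actual ones):

```yaml
nav:
  - Peppy:
      - Introduction: peppy/README.md
      - Python API: peppy/code/python-api.md
```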
You'll also need to install the packages being documented (peppy, looper, pipestat, pypiper, geofetch, eido, yacman) for the API documentation to build correctly.
I recommend previewing your changes locally before deploying. You can get a hot-reload server going by cloning this repository and then running:
```bash
mkdocs serve
```
You can also use `mkdocs build` to build a portable local version of the docs.
The documentation now uses **mkdocstrings** for Python API documentation and **mkdocs-jupyter** for Jupyter notebooks. These plugins automatically generate documentation from the source code and render notebooks, so the build process is now a single step.
### Publishing updates
The documentation is published automatically upon commits to `master` using a GitHub Action, which runs `mkdocs gh-deploy`. This builds the docs and pushes them to the `gh-pages` branch, which is then published with GitHub Pages. There's no need to do this locally; just let the action deploy the updates for you automatically.
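As a sketch, such a workflow (hypothetical file `.github/workflows/docs.yml`; the repo's actual workflow, action versions, and dependency list may differ) can be as simple as:

```yaml
name: Deploy docs
on:
  push:
    branches: [master]
jobs:
  deploy:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - uses: actions/setup-python@v5
        with:
          python-version: "3.11"
      # Install mkdocs plus whatever theme/plugins mkdocs.yml requires
      - run: pip install mkdocs
      - run: mkdocs gh-deploy --force
```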
## FAQ
### Python API Documentation
Python API documentation is now automatically generated using **mkdocstrings** during the build process. No separate script is needed. The API docs are defined in markdown files (e.g., `docs/peppy/code/python-api.md`) using the `:::` syntax:
```markdown
::: peppy.Project
    options:
      docstring_style: google
      show_source: true
```
This syntax tells mkdocstrings to extract and render the documentation for the specified class or function directly from the source code.
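For example, a function documented with a Google-style docstring like this hypothetical one is picked up and rendered by mkdocstrings without any extra steps:

```python
def load_project(path: str) -> dict:
    """Load a project configuration.

    Args:
        path: Path to the project config file.

    Returns:
        A dictionary of configuration values.
    """
    # Stub body; mkdocstrings only needs the signature and docstring.
    return {"config": path}
```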
### Jupyter Notebooks
Jupyter notebooks are now rendered automatically using the **mkdocs-jupyter** plugin. Configure which notebooks to include in the `plugins` section of `mkdocs.yml`:
```yaml
plugins:
  - mkdocs-jupyter:
      include:
        - peppy/notebooks/*.ipynb
        - looper/notebooks/*.ipynb
```
Notebooks are rendered directly from `.ipynb` files during the build; no conversion step is needed.
### CLI Usage Documentation
CLI usage documentation for geofetch can be updated manually when needed using the helper script:
```bash
python scripts/generate_cli_usage_docs.py
```
This script reads the template at `docs/geofetch/usage-template.md.tpl` and runs `geofetch --help` to generate `docs/geofetch/code/usage.md`. This only needs to be run when the CLI interface changes.
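The core of such a script is typically just capturing the help text and substituting it into the template. A minimal sketch (the `{usage}` placeholder name and helper are assumptions for illustration, not the actual script):

```python
def render_usage(template: str, help_text: str, placeholder: str = "{usage}") -> str:
    """Insert captured CLI help text into a documentation template."""
    return template.replace(placeholder, help_text.strip())

# In the real script, help_text would come from something like:
#   subprocess.run(["geofetch", "--help"], capture_output=True, text=True).stdout
```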
Any PEP should validate against that schema, which describes generic PEP format. We can go one step further and validate it against the PEPPRO schema, which describes Proseq projects specifically for this pipeline:
```bash
eido validate -h
```
Let's use the `eido convert` command to convert PEPs to a variety of different formats. `eido` supports a plugin system, which can be used by other tool developers to create Python plugin functions that save PEPs in a desired format. Please refer to the documentation for more details. For now, let's focus on a couple of plugins that are built into `eido`.
To see what plugins are currently available in your Python environment, call:
## docs/eido/code/demo.md

```yaml
# …
required:
  - samples
```
PEPs to successfully validate against this schema will need to fulfill all the generic PEP2.0.0 schema requirements _and_ fulfill the new `my_numeric_attribute` requirement.
Similarly, the config part of the PEP can be validated; the function inputs remain the same.
## Output details
As depicted above, the error raised by the `jsonschema` package is very detailed, because the entire validated PEP is printed out for the user's reference. Since this can get overwhelming for multi-sample PEPs, each of the `eido` functions presented above provides a way to limit the output to just the general information indicating the unmet schema requirements.
Eido provides built-in filter functions that can transform PEP projects into different output formats. These filters are useful for converting PEPs to various representations like YAML, CSV, or other formats.
### Available Filters
Eido includes several built-in filters for converting and exporting PEP data:
- **basic_pep_filter**: Returns the basic PEP representation
- **yaml_pep_filter**: Converts the PEP to YAML format
- **csv_pep_filter**: Exports sample tables as CSV
- **yaml_samples_pep_filter**: Exports only sample data as YAML