
# Commit 2996e0d: adding class examples

1 parent 3bf0dcf
3 files changed: 204 additions & 0 deletions

## mmNapari/readme.md (27 additions, 0 deletions)
## Simple example

## Install napari from source

```
git clone https://github.com/napari/napari.git
```

Assuming you are already in a mm_env Python virtual environment, either:

```
python -m pip install "napari[all]"
```

or:

```
pip install -e napari/.
```

Test that napari runs:

```
napari
```
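If launching from the command line is inconvenient, here is a quick Python sanity check. It is a sketch using only the standard library, so it works whether or not the install succeeded:

```python
import importlib.util

# check whether napari is importable from the current environment (e.g. mm_env)
spec = importlib.util.find_spec('napari')
if spec is None:
    print('napari is NOT installed in this environment')
else:
    print('napari found at:', spec.origin)
```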

## mmNapari/simpleExample.py (91 additions, 0 deletions)
```python
"""
Date: 20220109
Author: Robert Cudmore

Example script to load a mmMap and display the first timepoint mmStack with:
 - 3D image
 - 3D point annotations
 - 3D line tracings

To run locally, you need to specify the full path to the map in mapPath.
"""

import numpy as np
import napari
import pymapmanager

mapPath = '/media/cudmore/data/richard/rr30a/rr30a.txt'

def run():
    # load a mmMap
    myMap = pymapmanager.mmMap(mapPath)

    # get info on the loaded map
    print(myMap)
    #print(myMap.mapInfo())  # TODO: make more informative

    # get the real-world x/y scale of each image in micrometers (um).
    # x/y resolution is usually the same; z is unitless and simply corresponds to the image number
    mapInfo = myMap.mapInfo()
    dx = mapInfo['dx'][0]  # x-resolution (um) of the first mmStack
    dy = mapInfo['dy'][0]
    print('x resolution in um is: dx', dx, type(dx))  # note: it is a string
    print('y resolution in um is: dy', dy, type(dy))  # note: it is a string
    dx = float(dx)
    dy = float(dy)

    aStack = myMap.stacks[0]  # grab the first mmStack

    # CRITICAL: until this is called, images are not loaded
    aStack.loadStackImages()

    # grab the nd-image
    oneImageVolume = aStack.images

    # open a napari viewer with the image volume and specify the (z, x, y) scale
    viewer = napari.view_image(oneImageVolume, scale=(1, dx, dy))

    # pull 3D point annotations
    df = aStack.stackdb  # pandas DataFrame with x/y/z and roiType columns (among many others)
    spines = df[df['roiType'].isin(['spineROI'])]

    # the um x/y/z position of each spineROI annotation
    x = spines['x'].values
    y = spines['y'].values
    z = spines['z'].values

    # package x/y/z into points for napari.
    # note the order here is [z, y, x]; I usually think of the order as [z, x, y]
    arrays = [z, y, x]
    points = np.stack(arrays, axis=1)
    print('points:', points.shape)  # napari wants annotations in rows, then z/y/x in columns

    # create a points layer with our spineROI point annotations
    size = 2  # the size of each point displayed in napari
    points_layer = viewer.add_points(points, size=size, face_color='r')

    # load line/segment tracings from the mmStack
    xyzLine = aStack.line.getLine()  # returns a 2D numpy array with columns (x, y, z)
    xLine = xyzLine[:, 0]
    yLine = xyzLine[:, 1]
    zLine = xyzLine[:, 2]

    # package x/y/z into points for napari (again in [z, y, x] order)
    arrays = [zLine, yLine, xLine]
    linePoints = np.stack(arrays, axis=1)

    # create a points layer with our line segment tracings
    size = 2
    line_points_layer = viewer.add_points(linePoints, size=size, face_color='c')

    # as with any GUI, we need to enter an event loop so the viewer stays up
    napari.run()  # start the event loop and show the viewer

if __name__ == '__main__':
    run()
```

## readme-class-heirarchy.md (86 additions, 0 deletions)
## First steps

- Implement a GUI to display a single timepoint stack in napari
- Once a mmMap is loaded, you can display mmMap.stacks[0] in napari
- This includes:
    - raw 3D image data
    - displaying all spineROI annotations in the napari viewer as a 'points' layer
- Add interface elements to napari, including:
    - a list of the spineROI in the map
    - when the user clicks on an annotation in the list, snap to and highlight that annotation in the image/points layers
    - when the user clicks on an annotation in the image, snap to and highlight that annotation in the list
## mmStack

See the current API here: https://pymapmanager.readthedocs.io/en/latest/source/PyMapManager.html#module-pymapmanager.mmStack

The mmStack class encapsulates an nd-image and a list of annotations.

nd-image:

- load the image volume
- retrieve single image planes
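As a sketch of what "retrieve single image planes" means in practice, here is a stand-in numpy array in place of a real mmStack volume; the (z, y, x) axis ordering follows simpleExample.py:

```python
import numpy as np

# stand-in for aStack.images after aStack.loadStackImages():
# a 3D volume ordered (z, y, x)
imageVolume = np.zeros((10, 512, 512), dtype=np.uint16)

# retrieving a single image plane is just indexing the first (z) axis
plane = imageVolume[3, :, :]
print(plane.shape)  # (512, 512)
```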
### Annotations

We have a number of different annotation types. Each annotation corresponds to an identifiable feature in the nd-image and carries a 'real-world' description of the type of structure it is referring to.

We need to implement a CRUD (Create-Read-Update-Delete) interface for all annotations.
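A minimal sketch of what such a CRUD interface could look like over a pandas DataFrame. The PointAnnotations class and its method names are hypothetical, not part of the current pymapmanager API:

```python
import pandas as pd

class PointAnnotations:
    """A minimal CRUD sketch over a DataFrame of point annotations.
    (Hypothetical class; not part of the current pymapmanager API.)"""

    def __init__(self):
        self._df = pd.DataFrame(columns=['x', 'y', 'z', 'roiType', 'note'])

    def create(self, x, y, z, roiType, note=''):
        # append one annotation row and return its index
        self._df.loc[len(self._df)] = [x, y, z, roiType, note]
        return len(self._df) - 1

    def read(self, idx):
        return self._df.loc[idx].to_dict()

    def update(self, idx, **kwargs):
        for col, value in kwargs.items():
            self._df.loc[idx, col] = value

    def delete(self, idx):
        self._df = self._df.drop(index=idx).reset_index(drop=True)

pa = PointAnnotations()
i = pa.create(10, 20, 3, 'spineROI')
pa.update(i, note='first spine')
print(pa.read(i)['note'])  # first spine
pa.delete(i)
```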
### point annotations

These are stored as a pandas DataFrame, one row per annotation, with a number of columns specifying the features of each annotation.

Here are some example columns (features) in a point annotation:

- x (int): X pixel in the nd-image
- y (int): Y pixel in the nd-image
- z (int): Z pixel in the nd-image (for a 3D stack, z corresponds to the image plane)
- note (str): A user-specified note
- cDate (int): Creation date encoded as Python time.time() seconds (e.g. the linux epoch)
- mDate (int): Modification date encoded as Python time.time() seconds (e.g. the linux epoch)
- roiType (Enum): The 'real-world' structure of the annotation. For example ['globalPIvot', 'pivotPnt', 'controlPnt', 'spineROI']
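A toy version of such a DataFrame, filtered for spineROI the same way simpleExample.py filters stackdb (the coordinate values are made up):

```python
import time
import numpy as np
import pandas as pd

# a toy stand-in for the stackdb DataFrame described above
df = pd.DataFrame({
    'x': [10, 42, 7],
    'y': [20, 11, 30],
    'z': [3, 5, 3],
    'note': ['', 'check this one', ''],
    'cDate': [time.time()] * 3,
    'mDate': [time.time()] * 3,
    'roiType': ['spineROI', 'controlPnt', 'spineROI'],
})

# pull just the spineROI annotations, as in simpleExample.py,
# then package them as (z, y, x) rows for napari
spines = df[df['roiType'].isin(['spineROI'])]
points = np.stack([spines['z'].values, spines['y'].values, spines['x'].values], axis=1)
print(points.shape)  # (2, 3): two spineROI rows, columns (z, y, x)
```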
Example algorithms we use on different roiTypes:

A roiType of 'controlPnt' is a sequential set of points specified by the user that coarsely traces along a bright filament (dendrite or axon). We then algorithmically fit the brightest path between these points (see line annotations).

A roiType of 'spineROI' represents the location of a neuronal spine. We then algorithmically draw a number of ROIs (NOT IMPLEMENTED) to calculate parameters of the image intensity around this point. This generates a number of new 'features' of the ROI, each of which is then added as a column in the pandas DataFrame for this ROI.
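Since the ROI drawing is not yet implemented, here is only a hedged sketch of the idea, using a simple square box around the point as a stand-in ROI; the intMean/intMax column names are made up:

```python
import numpy as np
import pandas as pd

# toy volume and one spineROI point (hypothetical data)
imageVolume = np.random.default_rng(0).integers(0, 255, size=(5, 64, 64))
df = pd.DataFrame({'x': [30], 'y': [40], 'z': [2], 'roiType': ['spineROI']})

# sample a small box around each point and add intensity features as new columns
half = 3  # half-width of the box in pixels (an assumed ROI shape)
for i, row in df.iterrows():
    z, y, x = int(row['z']), int(row['y']), int(row['x'])
    box = imageVolume[z, y - half:y + half + 1, x - half:x + half + 1]
    df.loc[i, 'intMean'] = box.mean()
    df.loc[i, 'intMax'] = box.max()

print(df[['roiType', 'intMean', 'intMax']])
```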
### line annotations

Line annotations are basically a list of point annotations with some bookkeeping. To allow multiple disjoint but connected lines within an mmStack, we assign each point to a segmentID and use a 'prev' index to keep track of points within a given line segment and where they branch into a new one.

To organize line segments into a tree-like structure, we use the simple text file format SWC (see: https://journals.plos.org/plosone/article?id=10.1371/journal.pone.0228091). Each point in the line has (x, y, z) coordinates as well as a 'previous' index. The root of a tracing has previous = -1. Each sequential point along a line has 'previous' equal to its own index minus one. When a new branch starts, 'previous' is not index-1 but instead points to some other, earlier point in the tracing.

The user specifies a sparse set of 'controlPnt' annotations along a bright filament and we then compute the brightest path between these points. This path finding is not yet implemented in Python. We could use the Fiji Simple Neurite Tracer plugin to find the brightest path; I have scripts to trigger Fiji with the sparse set of controlPnt annotations and return a fine-grained path of (x, y, z). We might also want to implement this kind of brightest-path search ourselves (generally an A* algorithm).

Fiji/ImageJ is open-source image visualization and analysis software used by many: https://imagej.net/software/fiji/
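A toy sketch of a brightest-path search on a small 2D image (Dijkstra, i.e. A* with a zero heuristic). This is illustrative only, not the pymapmanager or Fiji implementation:

```python
import heapq
import numpy as np

def brightest_path(image, start, end):
    """Toy brightest-path search on a 2D image. Each pixel is a node;
    we minimize summed 'darkness' (max intensity minus pixel intensity)
    along the path, which favors bright pixels."""
    cost = image.max() - image.astype(float)  # bright pixels are cheap
    dist = {start: cost[start]}
    prev = {}
    heap = [(cost[start], start)]
    while heap:
        d, node = heapq.heappop(heap)
        if node == end:
            break
        if d > dist.get(node, np.inf):
            continue  # stale heap entry
        y, x = node
        for ny, nx in ((y - 1, x), (y + 1, x), (y, x - 1), (y, x + 1)):
            if 0 <= ny < image.shape[0] and 0 <= nx < image.shape[1]:
                nd = d + cost[ny, nx]
                if nd < dist.get((ny, nx), np.inf):
                    dist[(ny, nx)] = nd
                    prev[(ny, nx)] = node
                    heapq.heappush(heap, (nd, (ny, nx)))
    # walk prev pointers back from end to start
    path, node = [end], end
    while node != start:
        node = prev[node]
        path.append(node)
    return path[::-1]

# a tiny image with a bright diagonal-ish ridge
img = np.zeros((4, 4))
img[0, 0] = img[1, 1] = img[1, 2] = img[2, 3] = img[3, 3] = 10
print(brightest_path(img, (0, 0), (3, 3)))
```

The returned path hugs the bright ridge, crossing dark pixels only where the 4-connected grid forces it to.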
Here are some example columns (features) in a line annotation:

- x (int): X pixel in the nd-image
- y (int): Y pixel in the nd-image
- z (int): Z pixel in the nd-image (for a 3D stack, z corresponds to the image plane)
- radius (float): If the line is tracing a filament-like structure in the image, we calculate the radius/diameter with some analysis
- segmentID (int): Each point in a line belongs to a segment (0, 1, 2, ...)
- prev (int): Each point has a prev index. For point i:
    - prev == -1: this point is the root of the tree
    - prev == i-1: this point is a continuation of a line
    - prev != i-1: this point connects to some other segment
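The prev bookkeeping above can be sketched with a tiny SWC-style tracing (hypothetical coordinates):

```python
# each row: (index, x, y, z, prev); a root, a straight run, then a branch
tracing = [
    (0, 10, 10, 1, -1),  # root: prev == -1
    (1, 11, 10, 1, 0),   # continuation: prev == i-1
    (2, 12, 10, 1, 1),   # continuation
    (3, 12, 11, 2, 1),   # branch: prev == 1, not i-1
    (4, 12, 12, 2, 3),   # continuation of the branch
]

for i, x, y, z, prev in tracing:
    if prev == -1:
        kind = 'root'
    elif prev == i - 1:
        kind = 'continuation'
    else:
        kind = 'branch (connects to point %d)' % prev
    print(i, kind)
```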
### roi annotations

We don't have these yet. ROI annotations will hold the list of points included in a region-of-interest (ROI). This could hold an ROI you create by drawing an arbitrary shape on top of an image (e.g. with a lasso tool).

We need API functions to retrieve, add, delete, and edit annotations (e.g. CRUD).
## Definitions

- image plane: one of the individual image slices [i, :, :] of a 3D nd-image
- image slice: see image plane
