CL-Quant Ver.3.10
Instructions
Thank you very much for choosing Nikon.
This manual is prepared for users of the CL-Quant Ver.3.10 analysis software.
For trouble-free operation, read this manual before using the program.
No part of this manual may be reproduced or transmitted in any form without Nikon’s permission.
The specifications of this software and the content of this manual are subject to change without
notice.
Every effort has been made to ensure the accuracy of this manual. If you find that any portion of
this manual is unclear or incorrect, please contact your nearest Nikon representative.
Trademarks:
Table of Contents
Chapter 1 Getting Started with CL-Quant ........................................................................................ 1
Introduction ................................................................................................................................. 2
1.1 Installing the Software............................................................................................................ 3
1.2 Usage Guidelines .................................................................................................................. 4
1.3 CL-Quant Basics.................................................................................................................... 6
1.3.1 Window Layout ................................................................................................................. 6
1.3.2 File Types ......................................................................................................................... 7
1.3.3 Data Objects ..................................................................................................................... 7
1.3.4 Definitions......................................................................................................................... 9
1.3.5 Hot Keys ......................................................................................................................... 11
1.4 CL-Quant Preferences ......................................................................................................... 13
1.4.1 General Options ............................................................................................................. 13
1.4.2 Graphics and Log ........................................................................................................... 14
1.4.3 Advanced Options .......................................................................................................... 15
1.4.4 Recipes and Procedures Options ................................................................................... 16
Chapter 1 Getting Started with CL-Quant
Introduction
Welcome to Nikon’s CL-Quant 3.10 Analysis Software. CL-Quant provides everything you need to create high-performance microscopy image recognition analyses, on par with custom-written code, for your imaging experiment. Whether you are performing image-based assays or developing new ones, CL-Quant has the tools to get great results.
Learning CL-Quant
The software can be used in two primary modes: application execution mode and application
teaching mode. This guide is divided into three main chapters accordingly.
1.1 Installing the Software
The software requires a Windows PC running Microsoft .NET version 4.0 or greater.
If you have a new Windows PC, it probably already has .NET installed. Otherwise, you can download it free of charge from Microsoft over the Internet, or obtain it from technical support. The CL-Quant installer will automatically detect if .NET is not installed and will provide you with a link to download the free installer.
Once you have installed .NET, you can install CL-Quant. Double-click on the executable provided to you and follow the installation instructions.
NOTE:
- CL-Quant is available only in 64-bit version. Make sure you have the appropriate version
for your PC.
The software has been tested and characterized on platforms similar to the following:
Recommended Platform
1.2 Usage Guidelines
The following usage guidelines parallel how the software is tested. The guidelines are size limitations for the different types of usage (e.g. processing, displaying, teaching, etc.) that can be done with multi-dimensional data sets in the software. The limitations assume you have a PC configuration that meets or exceeds the recommended platform specifications. Please see the discussion section at the end for a detailed explanation.
Discussion
Limitations are presented in terms of multi-dimensional image dimensions as well as total limits.
For example, processing is limited in x,y to 400 MB (20,000 x 20,000 8-bit pixels) while display
can support up to 400 MB (20,000 x 20,000 pixels). Total limits are defined in terms of their
relevant dimensions. For example, total examine limits are a function of x,y,c (channel), and m
(position or FOV), while total processing limit does not include the m dimension. This is
because examination functions (i.e. LUT, threshold overlay, mask overlay, etc.) can be applied
to multiple FOVs simultaneously, whereas processing (i.e. procedure execution) is done one
FOV at a time.
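As a rough illustration of how these limits combine (a minimal sketch: only the 400 MB figure and the 20,000 x 20,000 8-bit plane come from the guidelines above; the other dimension counts are hypothetical examples):

    # Rough memory-footprint arithmetic for multi-dimensional data sets.
    def footprint_mb(x, y, bytes_per_pixel=1, c=1, t=1, z=1, m=1):
        """Approximate size in (decimal) megabytes of an FOV list."""
        return x * y * bytes_per_pixel * c * t * z * m / 1e6

    # One 20,000 x 20,000 8-bit plane is 400 MB.
    print(footprint_mb(20_000, 20_000))              # 400.0

    # Total examine usage scales with x, y, c and m (all FOVs shown at once),
    # whereas processing handles one FOV at a time, so m drops out.
    print(footprint_mb(2_000, 2_000, c=3, m=10))     # examine example: 120 MB
    print(footprint_mb(2_000, 2_000, c=3, m=1))      # processing example: 12 MB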
CL-Quant’s stitching function is only supported through Nikon’s BioStation CT CSV format.
Simply load the CSV file, and the stitching is done automatically.
“Load and display” covers loading a list of FOVs and navigating them (zoom, animate in time or Z), but without applying examination functions (listed in the examination menu).
1.3 CL-Quant Basics >> 1.3.1 Window Layout
When you display procedures and FOVs they will appear in RecognitionFrames (RFrames).
RFrames are the primary graphical interface in the software. RFrames are described in Section
2.1.2, “RecognitionFrame”.
In this section, we describe the supported data objects and file types. Additional information
about how to use these features for tasks is presented in Chapter 2, “Using CL-Quant” and
Chapter 3, “Teaching CL-Quant”.
1.3 CL-Quant Basics >> 1.3.2 File Types
An FOV can contain:
- images,
- segmentation masks,
- field measurements,
- object measurements,
- object measurement statistics, and
- object ROIs.
The FOV objects may also have multiple subset or class memberships. Initially, an FOV contains sets of images, aligned in x,y and arrayed in up to three dimensions: time, channel and z.
1.3 CL-Quant Basics >> 1.3.3 Data Objects
CL-Quant currently supports 5-dimensional imaging (x, y, z, time and channel). When a set of FOVs is loaded into an FOV list, a positional dimension is added, which enables users to navigate through multiple FOV positions (the “M” dimension).
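Conceptually, a single FOV can be pictured as a multi-dimensional image array, and an FOV list adds the positional “M” index on top. The sketch below uses NumPy purely for illustration; the axis order and the sizes are assumptions, not CL-Quant’s internal storage layout.

    import numpy as np

    # Hypothetical 5-dimensional FOV: (time, z, channel, y, x).
    fov = np.zeros((10, 5, 2, 512, 512), dtype=np.uint16)

    # An FOV list adds a positional "M" dimension for navigating between FOVs.
    fov_list = np.zeros((4, 10, 5, 2, 512, 512), dtype=np.uint16)  # (m, t, z, c, y, x)

    print(fov.shape, fov_list.shape)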
FOV List
FOVs can be organized into an FOV list. This can be done via drag and drop in the Data
Explorer. Also, when sets of FOVs are loaded together they will be organized into an FOV List.
All the FOVs in the list are presented together in the RFrame. The List enables batch
procedure execution and group based data analysis. Therefore, you should consider loading
FOVs from the same type of experiment together into a single FOV list. This will facilitate
batch processing and integrated data analysis. It is less convenient to process and analyze
data across RFrames.
(2) Procedures
Procedures encode the processing rules for analyzing FOVs. Procedures enable the software to be general purpose (because they can be taught for any application), and yet also provide “one-click” application execution and automation for high-volume applications.
We strive to make creation of procedures (what we call procedure teaching) easy and flexible,
so that any scientist can create powerful and high performance analyses for their application,
on par with custom written code.
- Import procedure:
Automates the importing of images from files into collections of FOVs.
- Enhancement procedure:
Automates traditional image and binary mask operations and common actions.
- Segmentation procedure:
Automates the innovative, machine-learning based confidence mapping.
- Measurement procedure:
Automates the calculation of field and object measurements.
- Decision procedure:
Automates the application of innovative, machine-learning based object classifiers which assign FOV objects into non-exclusive classes (e.g. live cell, dead cell, responder, non-responder, artifact, etc.).
- Tracking procedure:
Automates the tracking of objects in movies and calculation of kinetic measurements.
Your ability to create procedures is controlled by the configuration of the software you
purchased.
Procedure List
Procedures can be grouped, saved and reloaded as a procedure list file. The procedure list
provides a way to group and order a set of procedures for consecutive execution. You may
have been provided with a procedure list for your application when you purchased the
software.
Recipes
Functionally equivalent to a procedure list, a recipe is a finalized collection of procedures.
Unlike a procedure list, recipe update can only be done in the Recipe Console.
1.3.4 Definitions
BMP (Bit-Map) Image file format that separately stores the Red, Green and Blue
components of each pixel in the corresponding image. Images of this type
usually have a file suffix of “.bmp”.
Decision Module The interface that provides the tools necessary to teach and apply a
Probabilistic Classification Tree to a series of input data or images. This
mechanism allows the user to label objects and teach the software so
that it can differentiate between objects considered to be from different
categories or classes.
Enhancement Module The interface that provides the ability to enhance an image using general purpose image processing functions such as Add, Subtract, Convolve, Erode, Dilate, etc.
Import The process by which images or movies are loaded into the software in
preparation for processing or viewing. Import can be automated with
import procedures.
Movie An FOV whose channels have temporal extent, meaning a series of x,y
aligned image frames taken at regular time intervals. The movie, for
example, may be a time series taken of live cells as they move about on a
substrate.
TIFF (Tagged Image File Format) Image file format originally created by the company Aldus, jointly with Microsoft, for use with PostScript printing. TIFF is a popular format for high color depth images, along with JPEG and PNG.
1.3 CL-Quant Basics >> 1.3.5 Hot Keys
1.4 CL-Quant Preferences >> 1.4.1 General Options
CL-Quant preferences are options that can be configured to give you important control over CL-Quant performance and usability.
To access the Options window, select “File > Options”. At the top level are buttons to reset, apply or cancel your changes. The reset button will restore the original settings from when you first installed CL-Quant.
None If this is selected, no files will be displayed. They will appear as FOV data objects in the
data explorer.
Ask If selected, the user is prompted to choose between displaying all and displaying none.
Stitching Options
The software provides automated stitching of Nikon BioStation CSV format composite images
in three modes:
Fixed CSV file meta-data, which indicates the expected frame overlap, is used to stitch the
mosaic image without automatic alignment.
User In this mode you can specify the stitched image overlap. When loading, a window
will appear in which you can type the desired overlap.
Automatic The software will automatically align the input stitching images to create the stitched
composite. For multi-channel inputs, the first channel (Channel 0) alignment will be
used for all channels.
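To make the “Fixed” mode concrete, the sketch below places two neighboring frames into a mosaic using a known pixel overlap and no automatic alignment. The frame size, overlap value and helper name are assumptions for illustration; this is not the actual BioStation CSV stitching implementation.

    import numpy as np

    def stitch_pair_horizontal(left, right, overlap):
        """Place two frames side by side using a fixed pixel overlap (no alignment)."""
        h, w = left.shape
        mosaic = np.zeros((h, 2 * w - overlap), dtype=left.dtype)
        mosaic[:, :w] = left
        mosaic[:, w - overlap:] = right   # the right frame overwrites the shared columns
        return mosaic

    left = np.random.randint(0, 256, (1000, 1000), dtype=np.uint8)
    right = np.random.randint(0, 256, (1000, 1000), dtype=np.uint8)
    print(stitch_pair_horizontal(left, right, overlap=50).shape)   # (1000, 1950)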
Splash screen
If checked, the splash screen will be displayed when the software is launched.
Confirm exit
If checked, the software will prompt the user to confirm the exit before closing.
1.4 CL-Quant Preferences >> 1.4.2 Graphics and Log
(1) Graphics
Overlay mouse margin
Controls how sensitive the software is to mouse-over detection. If the mouse pointer is within this pixel range of an object of interest, the software will detect the object.
RGB merge
This specifies the method that will be used to merge a number of color assigned grayscale
channels into a single R,G,B channel. Note that while the combined channel is primarily for
visualization, it can also be processed by segmentation procedures. So the mode of
combination will affect combined channel segmentation processing.
Simple Each R,G,B component from the color input channels is summed and truncated at a maximum value of 255 per component.
Best Fit Each R,G,B component from the color input channels is summed, and then each component is scaled so that its maximum intensity is 255.
Saturation Best Fit Each R,G,B component from the color input channels is summed, the 95th percentile value of the summed components is determined, and the components are then scaled individually to 255 based on that value.
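The sketch below shows one plausible way the three merge modes could be computed for a set of color-assigned grayscale channels. The function name and inputs are assumptions; CL-Quant’s exact arithmetic may differ.

    import numpy as np

    def merge_channels(channels, colors, mode="Simple"):
        """channels: list of 2-D grayscale arrays; colors: matching (r, g, b) weights in 0..1."""
        rgb = np.zeros(channels[0].shape + (3,), dtype=np.float64)
        for ch, rgb_weight in zip(channels, colors):
            rgb += ch[..., None] * np.asarray(rgb_weight)        # sum the R,G,B contributions

        if mode == "Simple":                                     # truncate at 255 per component
            return np.clip(rgb, 0, 255).astype(np.uint8)
        if mode == "Best Fit":                                   # scale each component so its max is 255
            scale = 255.0 / np.maximum(rgb.max(axis=(0, 1)), 1e-9)
            return np.clip(rgb * scale, 0, 255).astype(np.uint8)
        if mode == "Saturation Best Fit":                        # scale to the 95th percentile instead
            p95 = np.maximum(np.percentile(rgb, 95, axis=(0, 1)), 1e-9)
            return np.clip(rgb * (255.0 / p95), 0, 255).astype(np.uint8)
        raise ValueError(mode)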
(2) Log
In the log section you can adjust the characteristics of CL-Quant’s log file.
Verbosity
Determines the amount of information recorded to the log file.
Formatting
If checked, a time-stamp will be added to log file output.
1.4 CL-Quant Preferences >> 1.4.3 Advanced Options
On demand loading
When this is checked, CL-Quant will load the images you need into memory when they are needed, rather than up front. This can make loading of large images or large image sets appear more responsive; however, batch processing will be slower. Check this option when you don’t want to wait a long time for a large data set to load and you are unconcerned about total processing time.
ImageJ executable
You can specify the location for the ImageJ program executable. This is needed in order to
run ImageJ functions in CL-Quant.
Movie compression
Options for saving image sequences to AVI format.
1.4 CL-Quant Preferences >> 1.4.4 Recipes and Procedures Options
Chapter 2 Using CL-Quant
Chapter Overview
This chapter describes using CL-Quant for image examination, applying recipes and
procedures, and performing data analysis. Creating recipes and procedures is described in
Chapter 3, “Teaching CL-Quant”.
2.1 User Interface
When you open the software, you will see three panels:
- Main panel:
A large blank workspace
- Data explorer:
Top panel on the right. A repository for data objects including “FOVs” (Fields of View), and
“Procedures” (and also recipes).
- Controls panel:
Multi-tabbed panel at the bottom. Contains data object information in the “Properties” tab and
key controls in the “Properties” and “Controls” tabs.
When you display FOVs, they will appear in RecognitionFrames (RFrames). RFrames are the
primary graphical interface in the software. RFrames are described in Section 2.1.2,
“RecognitionFrame”.
2.1 User Interface >> 2.1.1 Data Explorer
If you have multiple objects highlighted, you may choose the option “Display All”. As the
name suggests, this option will open new windows in the main panel, each one displaying an
object that was highlighted when you chose the option.
To hide multiple data objects, select multiple data objects in the data explorer by holding the
“Shift” or “Ctrl” key while left-clicking them. When finished selecting, right-click on one of the
selected objects and then left-click on the option “Hide All”.
NOTE:
If you have not saved that object to disk, removing it from the data explorer will mean
losing any changes you made.
To remove multiple data objects, select them in the data explorer, right-click on one of the
selected objects, and then left-click on the option “Remove”.
To specifically select only a few objects, hold down the “Ctrl” key and click on any unselected
object.
NOTE:
Using the “Ctrl” key to click on a selected object will deselect it.
2.1.2 RecognitionFrame
When you load one or more multi-dimensional images they will be displayed in the
RecognitionFrame (RFrame). The RFrame is the primary graphical user interface in the
software and can display a single FOV, or multiple FOVs in an FOV list. It contains all the
elements of an imaging experiment in a single window. The primary elements of the interface
are:
- Image panel,
- Content panel,
- Toolbar.
2.1.3 Toolbar
The RFrame toolbar provides key functionalities for image and mask examination, manipulation,
and data analysis. It also provides ways to create enhancement, gating and decision
procedures.
Pointing tool
Magnifier glass
Zoom to rectangle
View masks
View channels
View overlay
Channel and mask manager
Toggle metadata overlay
Modify frames
MIP tool
Toggle scale overlay - Shows scale bar with calibration (if available) overlay on the image.
Crop image frames - Crops the FOV to a specified time window and sampling steps.
Toggle selected objects - Toggles object bounding box or track display for selected objects.
2.1 User Interface >> 2.1.4 Zoom Controls
NOTE:
Keep the LUT turned on when viewing or processing 16-bit images to ensure the best picture contrast.
Reset LUT
This button will reset any LUT settings, and turn off the LUT.
Fit to screen
The image will be zoomed to fit completely on the screen.
Best fit
The image will be zoomed to fit the smaller of its height or width to the screen.
Zoom 1:1
This will show the actual pixels, meaning one display pixel for one image pixel, or 100% zoom
level.
2.1.5 Minimap
Minimap control is located under the zoom controls toolbar in the RFrame. To turn on minimap
display, click on the minimap control icon.
NOTE:
Minimap display will not appear if the entire image is visible. Zoom in anywhere on the
image to view.
Minimap display
When enabled, the minimap display is shown in the upper left corner of the image panel
overlaid on the image. Note that masks and object overlays on the image are not visible in the
minimap view. The minimap is an abstract representation of the entire image. The displayed
region is shown as a neon-green rectangle that shows the current view in relation to the entire
image.
2.1 User Interface >> 2.1.6 Metadata Overlays
NOTE:
Metadata information may not be available for all images.
You can burn the metadata overlay display onto the image by selecting the “Burn Metadata” option, accessed by clicking on the arrowhead below the option while the Metadata overlay is enabled.
NOTE:
This action will modify the image and cannot be reversed. Any pixels and image
information behind the overlay burnt region will be lost.
You can burn the scale overlay display onto the image by selecting the “Burn Scale” option, accessed by clicking on the arrowhead below the option while the Scale overlay is enabled.
NOTE:
This action will modify the image and cannot be reversed. Any pixels and image
information behind the overlay burnt region will be lost.
2.1 User Interface >> 2.1.7 ROI Tool
When selected, click-and-drag over an area on the image to create a region of interest. A pink,
rectangular border will indicate the selected region of interest. You can move the ROI by holding
down the left-click while dragging inside the ROI. To resize the ROI, hold down the left-click
while dragging on the ROI border.
Copy image
Copies the image inside the ROI onto the clipboard, scaled relative to the actual image size
Copy ROI
Copies the ROI frame only
Paste ROI
Pastes the ROI at the same location as the copied ROI
Delete ROI
Deletes the ROI frame
ROI properties
Displays ROI properties in a new window
Crop ROI
Crops image to ROI frame
Center X
Horizontal position of the ROI center
Center Y
Vertical position of the ROI center
Width
Width of the ROI
Height
Height of the ROI
Use the textbox to adjust the ROI attributes as needed. Click “Apply” to update ROI position
and size.
2.2 Loading and Saving Files >> 2.2.1 Loading FOVs
This section covers loading and saving files and other data objects, and exporting FOVs and
data.
To learn how to configure import procedures, which can automate the import of FOVs and FOV
elements (masks and channels), see Chapter 3, “Teaching CL-Quant”.
Image frames can be imported to create a multi-dimensional FOV (see Section 3.4.1, “FOV
Import Procedure”).
- Select the file in a Windows Explorer window and drag it into the software.
or
- Click on the “File” menu and then click on the “Open” option, which will prompt a dialog
window that allows you to browse your computer and local network to the FOVs you wish
to load into the software.
When you load an FOV into CL-Quant, it will be displayed in an RFrame. It will also appear in
the data explorer panel under the “FOVs” tab.
If you load multiple image files, by default they will be loaded into an FOV List. Loading an
ICS/IDS NEX file will load all of the linked files into an FOV list. Also loading BioStation CT 3.0
format CSV file will load any linked FOVs (i.e. from a well plate) into an FOV list. You can also
create an FOV list by right clicking on the FOV list folder.
You can modify the CL-Quant policy for displaying FOVs when loading, and also for grouping
FOVs into lists in the CL-Quant general options.
The loading of two-channel FOVs consisting of 20 x 20 frames (1,000 x 1,000 pixels each) has been validated on the recommended platform.
2.2 Loading and Saving Files >> 2.2.3 Loading Other File Types
There are three ways of loading BioStation CSV format stitched images, which are configurable
in the CL-Quant options settings (under “File > Options”):
- Fixed stitching:
Pixel overlap between adjacent frames specified in the CSV format meta-data is used for
stitching.
- User stitching:
The user can specify the pixel overlap amount.
- Auto stitching:
CL-Quant will automatically align the input image frames for stitching.
When you load or import the stitched FOV, a load window will appear that gives you the option of selecting a time range or sampling interval for loading, and the amount of pixel overlap for stitching (when in “User” mode).
The arrangement of frames into stitched images, and stitched images into multi-channel FOVs
can also be accomplished through stitched image FOV import (see Section 3.4.1, “FOV Import
Procedure”).
Loading a recipe
You can always load recipes by simply dragging and dropping them into the software, or by
clicking on the “Open..” option found in the “File” menu.
Loading a procedure
Procedures can also be opened via drag and drop or “File > Open”.
If you have to load several procedures at once (and do so several times), you should create a
list of those procedures. This is described below in Section 2.2.6, “Saving other file types”.
You can also save the FOV to TIFF format by right-clicking on it in the data explorer under the
“FOVs” tab. Click on the option “Save”, and a save window will open.
On the “General Options” page you can select the “Load and save objects and data” option,
which will allow CL-Quant to save FOV’s objects, data and subsets.
You may save multiple FOVs by selecting multiple FOVs (“Shift” or “Ctrl” click) at once, or by
selecting the FOV list data object.
2.2 Loading and Saving Files >> 2.2.5 Saving FOV Images
NOTE:
“Save displayed image” saves one image at its original resolution. “Save Snapshot”
saves multiple FOVs displayed in the RFrame image panel exactly as they appear on
your monitor.
2.2 Loading and Saving Files >> 2.2.6 Saving Other File Types
Segmentation, measurement and decision procedures can be saved with or without the
teaching images. This is a configurable option under the options menu. See Section 1.4,
“CL-Quant Preferences”.
If you save and reload the procedure without the teaching images, you won’t be able to view
the teaching images used to create the procedure.
You must specify the processing order of the procedures by arranging their order in the list. You can lock the list order by right-clicking and selecting “Lock Procedure List”. When a procedure list is locked, it will appear as a single data object whose sub-procedures cannot be viewed or accessed, which can reduce the apparent complexity for users.
To save, right click on the procedure list data object and choose “Save”, or select the data
object and then choose “File > Save”. The procedure list will be saved to a single file on the
file system.
2.2 Loading and Saving Files >> 2.2.7 Exporting FOVs
2.3 Manipulating FOVs >> 2.3.1 Edit FOVs
An FOV can contain images, segmentation masks, field measurements, object measurements, object measurement statistics, and object ROIs. FOV objects may also have multiple subset or class memberships. Initially, an FOV contains sets of images, aligned in x,y and arrayed in up to three dimensions: z, time and channel.
Once an FOV is loaded, you can manipulate the FOV by manually editing its channels and
masks, calibrating its spatial and intensity units, merging and appending multiple FOVs into one,
and cropping the FOV in x,y and t dimensions.
- Adding a channel
To add a channel (be sure to have an FOV loaded), right-click on the bottom strip (in the
blank area or on a channel tab itself) of the FOV and select “Add Channel”.
- Inserting a channel
To insert a channel, right-click on the channel tab positioned just after where you wish to insert the new channel and select “Insert Channel”.
- Removing/renaming a channel
To remove or rename a channel, simply right-click on the channel tab you wish to remove or
rename, and select the appropriate option.
- Adding a mask
To add a mask (be sure to have an FOV open), you can right-click on the mask strip (along
the top of the image panel) of the FOV and select the only option, which is to “Add Mask”.
- Removing/renaming a mask
To remove or rename masks, simply right-click on the mask tab of the mask you wish to
remove or rename and select the appropriate option.
2.3 Manipulating FOVs >> 2.3.2 Merge FOVs
To merge the two FOVs, display the destination FOV in an RFrame, and then drag the FOV to be merged from the data explorer onto the channel tab of the destination FOV; a new channel will be created automatically. The image you want to merge must have the same dimensions (x, y) as the existing FOV.
Simply drag the FOV to be merged onto the mask strip of the destination FOV; a new mask will be created automatically.
You may append the FOV either before or after the current frame, and you may merge as many FOVs into the destination FOV’s time dimension as you like. Display the destination FOV in an RFrame. To insert before the current frame, drag the FOV to be merged from the Data Explorer to the left of the time slider. To insert a movie after the current frame, drag the FOV to be merged to the right of the slider.
2.3 Manipulating FOVs >> 2.3.4 Crop FOV
Your cropped FOV will appear in the data explorer under the FOVs tab and folder.
Once you select your options, click “OK” and a new FOV will be created. It will appear in the
data explorer.
2.3 Manipulating FOVs >> 2.3.5 Image Alignment
Image alignment may also be necessary in fluorescence or other dead-cell assays when the mechanical system incorrectly positions a sample for microscopic analysis. In this case, alignment can be called upon to adjust the image so that samples taken at various times and positions can be aligned in the spatial domain.
It is important to note that software image alignment always results in some degree of x, y cropping of the aligned images. This is because the images are shifted in x and y by the exact amount necessary to align the two images. When an image is shifted, the pixels around the border of the image that previously existed outside the bounds of the original image are now brought into the image area. Since this data is “unknown”, the new pixels are set to zero so there can be no misinterpretation in the resulting image. For this reason, the aligned image will always contain less pixel data than either of the two images provided as input. The resulting aligned image will contain valid pixel information only for the area where the two images intersect.
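The sketch below illustrates why alignment crops the result: the shifted image is zero-filled where no data exists, so only the intersection of the two fields of view remains valid. Whole-pixel shifts and the helper name are assumptions; CL-Quant’s alignment engine itself is not reproduced here.

    import numpy as np

    def shift_with_zero_fill(img, dy, dx):
        """Shift an image by (dy, dx) whole pixels; exposed border pixels are set to zero."""
        out = np.zeros_like(img)
        h, w = img.shape
        # Destination and source row/column ranges for the shifted copy.
        rd = slice(max(dy, 0), h + min(dy, 0)); rs = slice(max(-dy, 0), h - max(dy, 0))
        cd = slice(max(dx, 0), w + min(dx, 0)); cs = slice(max(-dx, 0), w - max(dx, 0))
        out[rd, cd] = img[rs, cs]
        return out

    reference = np.random.randint(0, 256, (1000, 1000), dtype=np.uint8)
    moving = np.roll(reference, (7, -12), axis=(0, 1))   # stand-in for stage drift
    aligned = shift_with_zero_fill(moving, -7, 12)       # undo the drift; borders become zero
    # Only the intersection of the two fields now contains valid (non-zero-filled) pixels.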
The following three alignment tools are provided depending on the application requirements.
The ROI-based alignment operations require at least one ROI defined anywhere within the time
series. When multiple ROIs are specified, they must be specified at the same time index (same
image).
In RGA, all ROIs are statically positioned and do not move over the course of the alignment
analysis. RGA is useful if you have an object, or series of objects, that do not move over the
entire course of the time series. For example, a Petri dish may have a reference marker
etched into the glass or an imperfection in the glass that can be used as a reference. In this
case the user would select a ROI around the reference marker so RGA can align all image
data to that reference.
In RIA, the positions of all ROIs are adjusted with each successive frame in the time series.
Thus, RIA is capable of tracking multiple objects that may be uniformly drifting in position
during the course of an experiment. This alignment mechanism should not be used in
applications where there is differential movement between the objects as the differential
movement may send contradictory feedback to the alignment engine.
- Select objects that are stationary relative to all the other cells in the image (RGA only).
- Select objects with good contrast over those with poor contrast.
- When possible, select objects in the center of the field of view, so that they are less likely to
move outside the field of view.
- Select objects in the middle of a long movie so that overall drift is minimized.
- Select objects that remain within the field of view the longest.
- Select multiple objects when no single object remains within the field of view for the entire
movie.
- Select multiple objects to improve the accuracy and robustness of the alignment.
Alignment Limitations
Image movement must be less than a quarter of the x and y dimensions of the image. In our
tests, the image dimensions were 1,000 x 1,000 pixels so the limit for maximum movement
was 250 pixels. Movement between successive images greater than 250 will result in loss of
alignment.
2.3 Manipulating FOVs >> 2.3.6 Modify Frames
Reverse T
This option will reverse the FOV’s time dimension and the frame order of the T-series.
Reverse Z
This option will reverse the FOV’s axial/Z dimension and the frame order of the Z-stack.
2.3 Manipulating FOVs >> 2.3.7 MIP Functions
There are five types of MIP that can be generated using the MIP function.
Min Projection
Outputs an MIP image containing the minimum intensity for each pixel over the entire Z-stack.
(Figure 1).
Max Projection
Outputs an MIP image containing the maximum intensity for each pixel over the entire Z-stack.
(Figure 2).
Mean Projection
Outputs an MIP image containing the mean intensity for each pixel over the entire Z-stack.
(Figure 3).
Median Projection
Outputs an MIP image containing the median intensity for each pixel over the entire Z-stack.
(Figure 4).
HSDEF Projection
Outputs an MIP image containing the 2D extended depth of focus (EDF) intensity projection
image. The EDF projection combines the in-focus pixels of each Z-stack image into one
image, ensuring that the projection output is all in focus. (Figure 5)
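For reference, the first four projection types reduce to simple per-pixel statistics along the Z axis. The sketch below assumes a (z, y, x) array layout; the HSDEF/EDF projection requires a per-pixel focus measure and is not reproduced here.

    import numpy as np

    # Hypothetical Z-stack: 15 planes of 512 x 512 16-bit pixels, axes (z, y, x).
    stack = np.random.randint(0, 4096, (15, 512, 512), dtype=np.uint16)

    min_proj    = stack.min(axis=0)          # Min Projection
    max_proj    = stack.max(axis=0)          # Max Projection (classic MIP)
    mean_proj   = stack.mean(axis=0)         # Mean Projection
    median_proj = np.median(stack, axis=0)   # Median Projection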
2.4 Examining FOVs >> 2.4.1 Freehand Measurement Tools
The examination toolbar contains functions related to image, mask and object region of interest
viewing. This section describes many common and useful interface features.
When toggled, the annotation tools will be displayed in the “Annotations” tab in the Controls
panel.
Length measurement
Measures the straight line length of a drawn line
Area measurement
Measures the area of a drawn region
Angle measurement
Measures the angle between two drawn lines
Click on any of the measurement tools and start drawing on the image. The annotations will
appear in the Fields panel in the “Annotations” tab under the measured image name.
If the image is calibrated, users can change the measurement unit by selecting a desired unit
from the display unit dropdown menu.
2.4 Examining FOVs >> 2.4.2 Navigate the FOV or FOV List
In addition, there is the “Grabber” tool which allows you to navigate in x,y by dragging your
mouse when this tool is selected. Click and hold your mouse pointer on the image to get the
grabber tool.
Each green block represents an image in the sequence. The green arrow plays the sequence.
Next to that are controls for stepping through the sequence and advancing to the end of the
sequence. You can adjust the playing speed using the +/- buttons or set playing speed to real
time (R) or maximum (M) using the controls on the right.
2.4 Examining FOVs >> 2.4.3 Look Up Table
Access the LUT panel by clicking on the “Toggle LUTs window” button on the zoom control
toolbar or pressing “Ctrl”+“Alt”+“L” on the keyboard.
2.4 Examining FOVs >> 2.4.4 Magnify Tool
You may use the zooming features built into the RFrame, or if you just need to take a quick look
at a particular feature in the FOV, you can use the magnify tool. The icon is found in the
examination tools area of the tool bar.
While activated, the image will be magnified while you hold down the left mouse key.
You can also zoom to a specific region of the image using the “Zoom to Rectangle” tool on the
toolbar. Left-click and drag over the region you wish to magnify and release the mouse button to
zoom. The “Zoom to Rectangle” tool icon is shown below.
1. Click the “Show” or “Hide” button to visualize the selected pixels in the range.
2.4 Examining FOVs >> 2.4.6 Line Profiler Tool
To access the tool click on the “Toggle line profiler tool” icon.
When you draw a line across the image, a trace of the intensity values is shown overlaid on the image, as shown in the figure below.
The channel and mask manager can be used to display multiple masks as well as to change the attributes of individual channels and masks. An example of the channel and mask manager dialog is shown below.
Channels are shown in gray with masks from the respective channels displayed in blue and
organized under the specific channels. A lighter color indicates the channel/mask is displayed
on the image.
2.4 Examining FOVs >> 2.4.8 Calibration Properties
Channel options
To rename a channel, right-click on the channel panel and select “Rename channel” from the context menu. The user can also specify the channel color (the color which appears in the RGB-combined channel, if enabled) and include or exclude the channel from the RGB-combined channel using the options shown above.
Mask options
Multiple masks can be displayed by toggling the eye icon in the mask options. Use the slider to adjust the mask transparency and the mask color pulldown menu to specify the mask color. The user can opt to color individual mask components differently by enabling the Color by label option. Note that the Color by label option, when enabled, will override the displayed mask color. If a detection mask is used for object measurements, the primary mask for measurement will be indicated with the suffix “(Primary)”.
2.4 Examining FOVs >> 2.4.9 Gallery View
Use the textbox provided to adjust the time and X/Y calibration settings. Click on the measurement drop-down menu to the right to select the calibration units, or disable calibration by selecting the “Uncalibrated” option from the drop-down menu.
To launch Gallery view, you must have objects or trajectories selected. Click on the “Gallery
View” icon on the magnification toolbar above the image.
User interface
The Gallery View user interface is split into three sections:
- Thumbnail gallery
Displays the selected objects as small thumbnails. Options are provided for sorting and displaying specific image statistics.
- Filmstrip
Allows the user to “focus” on a single image thumbnail and displays the selected object at other time points.
2.4 Examining FOVs >> 2.4.10 BioStation CT Data Viewer
To launch the BioCT data viewer, go to “File > Open” and load a BioCT JSON file.
NOTE:
The data explorer, along with files loaded into it, will not be accessible in the BioStation CT
data viewer mode. You must exit the BioStation CT data viewer mode to view non-JSON
image data.
User interface
The BioStation CT data viewer has a specialized user interface that allows the user to quickly
and easily select experiments and FOVs to view.
There are two main panels:
- Image panel
Shows the stitched and loaded BioCT images.
2.5 Apply Procedure
When you have loaded a procedure in the data explorer, activate the Controls panel at the
bottom of the workspace and select the “Measure” tab.
To apply a procedure:
2. Select the FOVs to which you will apply the procedure in the image panel.
Note that for FOVs with time and z extent, you can select a range and sampling for the apply.
2.6 Data Analysis >> 2.6.1 Fixed Point Analysis
CL-Quant allows the user to view measurement data and provides tools to review or gate object
data. This section provides information on performing fixed-point analysis and kinetic analysis
on the image.
Objects
When you apply object segmentation and measurement procedures to an FOV it will create
objects. Objects are a primary unit of analysis that would likely correspond to cellular or
subcellular phenomena depending on your application.
An object is associated with its set of measurements, and can be represented as an ROI
overlain on the FOV, a row in the object tab of the spreadsheet, or as a data point in the
charts.
Each FOV in the FOV List will have its own object tab in the spreadsheet which will contain
the object measurements as shown in the screen capture below.
Subsets
Object subsets are collections of objects from one or more FOVs. A subset can be created
manually by gating or through the application of a decision procedure. Subset objects are not
removed from their original FOV, they exist in the subset as well as the FOV. When a subset
is created, a new object tab is created in the spreadsheet containing the member object data.
Automatic subset creation can only be done with decision procedures. Decision
procedures can only be created if your version supports the decision module.
Object linking
Any object you select, regardless of where you selected it (e.g. image panel, chart, spreadsheet), will automatically be highlighted in the spreadsheet, charts, and image views.
Under the “Toggle Objects” icon, there is an icon for “Toggle selected objects”. In this
mode, only the bounding boxes for objects selected in charts or spreadsheets will be
displayed.
Color objects
Objects can be colored and identified based on their FOV or subset membership. If there are
object subsets in the FOV, you can click on the “Controls” tab located at the bottom of the
rightmost panel to display the object color options.
The first highlighted “F” icon adjusts color options for the color by FOV mode.
The “S” icon will allow you to adjust the color by subset mode options.
You can adjust the opacity of the colors by clicking on the drop-down menu (you may
have to scroll to the right to see everything).
(3) Gating
You can manually gate a subset by selecting objects in charts, on images, or in the
spreadsheet.
You can select objects in a histogram by clicking one or more of the histogram bars, which
turn red when selected (as shown here). To gate these objects select “Analysis > Gate” from
the menu bar.
2.6 Data Analysis >> 2.6.2 Kinetic Analysis
This will display the object tracks. Tracks are a series of connected vertices showing the
location of the tracked object at various time points.
The figure above shows a track overlaid in orange on a moving cell (the segmentation mask
is also shown in cyan). The current location of the object is shown as a green, circular ROI
(red arrow, above). The x,y location of the object at previous time points is shown as orange
boxes connected by a colored line.
Adjusting tracks
You can adjust the track length with “Track Visibility” parameter from the “Track Display
Options” dialog. To launch the dialog, click on the “Track display options” button on the
toolbar (shown below).
The “Track Display Options” dialog has many options to adjust the track appearance and
presentation. The dialog is shown below.
Thickness You can adjust the thickness of the track by selecting the desired thickness
here.
Anchors You can choose to hide all or some anchors or show all anchors. Anchor sizes can be adjusted by changing the track thickness.
Fade You can choose whether the track keeps the same appearance for all track points or fades out with older track points.
Track Visibility
When this parameter is set to 1, only the current track location, denoted by a green circle showing the active anchor, is visible. When this parameter is set to all, the whole track is displayed. The pull-down menu and the slider provide precise step definition and can be set to show a set number of previous track points along with the current point.
Show Contacts
Use the pull-down menu to specify how the location at which two tracks intersect is shown.
Coloring
Use the pull-down menu to specify how track colors are shown. You can specify the track
colors by state, direction, lineage, or track number.
Location Map
Use the pull-down menu to enable the display of the specific location of a track at a certain time point. There are two location map options - first frame and last frame.
Data types
- Dynamic data
As shown in the figure above, dynamic data appears in the first tab. Dynamic data
characterizes a single time point. It could be instantaneous information, such as the
average intensity of the object at that time point. It could also be accumulated information,
such as the curvilinear velocity of the object from its initial time point until the current time
point, or within a moving window t frames before the current time point.
- Track data
Track data appears in the second tab with the “-Tracks” suffix. Track data, such as total
length, first frame, etc. characterize the entire track.
Hold ZT
By default, every time you advance one frame, the spreadsheet will update with the
dynamic data for the current time frame, which can be inefficient. You can suspend
spreadsheet update by selecting the “Hold ZT” option to disable automatic update.
Update ZT
Select this option to display the dynamic data for the current frame.
Auto Advance
Select this option to automatically advance the image sequence to the first frame in which the selected object appears.
Click on the axis title of the secondary axis to select a measurement to plot. The measurement being plotted on the primary y-axis will be displayed in parentheses in the measurement selection menu. An example of a two-axes trace plot with acceleration and velocity magnitude charted is shown below.
The axes value can be adjusted by clicking on the min and max values of the respective axes
(circled below). A textbox will replace the axis labels. Enter a new value into the textbox and
press Enter to confirm. The axis will be renumbered based on the inputs.
2.6.3 Charting
Multiple chart types are supported. To change the type of chart you are using, click on the
“Charts” tab in the Controls panel and select the available option as shown below.
NOTE:
Charts will only become enabled after applying a recipe. Not all chart types are available for
all recipes.
Chart options
Additional chart options are shown on the charts toolbar. These options allow you to modify the
chart appearance and export chart data.
Show/hide legends
Toggles legends display on charts.
Reset axes
Resets chart axes to default.
- Chart legends
When plotting multiple objects, the plots are shown in different colors. The chart legends
provide information about the object plots and the corresponding objects. To toggle legends
display, click on the “Show/hide legends” icon in the charts toolbar. The legends are overlaid
on the chart in the upper right corner of the chart.
- Chart axes
The user can change the chart axes range by clicking on the minimum value and the maximum value on each axis. The cursor will change to an I-beam when hovering over the axis values, indicating that they can be adjusted. The user can reset the chart axes back to default using the “Reset axes” option.
(1) Histogram
Histograms present data for one or more object sets (images and subsets). When you select the object set in the summary tab of the spreadsheet, its data will be plotted in the chart.
To change the color of the selected dots, click on the palette icon on the charts toolbar and select a new color from the list provided or add a new color.
Normalize
Normalizes the measurement axis to a fixed scale between 0 (min) and 1 (max).
Reset axes
Resets chart axis.
Secondary axis
Adds a secondary Y-axis to the trace plot on the right. You can click on the axis label of the secondary axis to plot a secondary measurement.
Color by measure
Colors object trace plot by measurement type.
Color by object
Colors object trace plot by object.
Color by group
Colors object trace plot by selected multipoint or subset membership.
Object trace plot for selected trajectories in Image 1 (M = 0, top) and Image 2 (M = 1, bottom)
As shown above, objects can be individually selected from each of the multi-point images and plotted in the Multiple trace plot window. The same specified measurement trace plots are plotted for both multi-point images.
Click on an object trajectory on the image to select and plot the track. You can plot multiple
trajectories by holding down the “Shift” key while selecting trajectories from the image. To plot
all object tracks on the directional plot, right-click on the chart and select the “All” option in the context menu.
Directional plot with current anchor only; directional plot with all anchors
2.6 Data Analysis >> 2.6.4 Object Histogram
To plot object histograms, click on the object bounding boxes or trajectories on the image. The intensity histograms for individual objects are plotted on the histogram plot.
Selecting channels
You can specify the channels in the object histogram by clicking on the dropdown menu next to the “Channels” label. You can select individual channels, all channels, or no channels from the menu and the object histogram plot will be adjusted accordingly.
Show/hide legends
Toggles legends display for object histograms.
2.7 Track and Lineage Editing >> 2.7.1 Track Editing Grid
CL-Quant provides a way for the user to create new tracks or modify existing tracks and/or lineage in an image sequence to optimize the result. To launch the track and lineage editing mode, click on the “Toggle trajectory editor” icon on the toolbar (below).
Snap to peak
Enable this option to use the peak detection algorithm to detect the likely track point location when interpolating track points.
Undo
Undo the previous track editing action.
Redo
Redo the last undone track editing action.
2.7 Track and Lineage Editing >> 2.7.2 Editing Tracks on the Grid
Track list
Selected tracks will appear in the track list. To remove a track from the track list, click on the
pushpin icon ( ) next to the track name to “unpin” the track from the grid. You can also click on
the “Pin all tracks to list” icon ( ) to unpin all tracks. An empty outline of the pushpin icon ( )
indicates the track is unpinned. Click anywhere on the image to remove the unpinned track from
the track list. Note that the track will not be removed from the image, only removed from view in
the grid.
Timeline
Track points of listed tracks are displayed in the timeline. Tracks appear like a “string of pearls”
in the timeline view, showing the start and end frame of the trajectory. The green highlight
illustrates the current time frame; double-click on a frame to change the current frame position.
After selecting the tracks you wish to edit, you can begin manipulating the tracks using the Track Editing Grid. You can connect two tracks at different points in time or break a track into multiple components. The following sections provide instructions on how to connect and disconnect tracks in the Track Editing Grid.
Two tracks can be joined together even if they do not overlap or appear in adjacent time
frames. The software will automatically insert the missing time points and interpolate the
position.
There are two ways that the track point position is interpolated - by mean distance traveled or
by peak detection.
- Peak detection:
The software will attempt to interpolate the intermediate points based on their most likely
position (i.e. at locations where image intensity is the highest). You can turn on peak
detection by toggling the magnet icon ( ) in the track editing toolbar.
A comparison between the interpolation approaches is shown below (Figure 1: mean distance traveled; Figure 2: peak detection).
The mean distance traveled method (Figure 1) may place interpolated track points incorrectly
if they are not near the linear path whereas the peak detection method (Figure 2) will attempt
to place the interpolated point based on the pixel intensity of the image.
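A minimal sketch of the two interpolation ideas follows. The helper function, the search-window size, and the frame inputs are hypothetical; the software's actual peak detection is more elaborate than this nearest-bright-pixel stand-in.

    import numpy as np

    def interpolate_gap(p_start, p_end, n_missing, frames=None, search=5):
        """Fill n_missing (y, x) track points between p_start and p_end.

        Without `frames`, the points are spaced evenly along the straight line
        (the mean-distance-traveled idea). With `frames` (one image per missing
        time point), each linear guess is snapped to the brightest pixel within
        `search` pixels, mimicking peak detection.
        """
        points = []
        for i in range(1, n_missing + 1):
            t = i / (n_missing + 1)
            y = p_start[0] + t * (p_end[0] - p_start[0])
            x = p_start[1] + t * (p_end[1] - p_start[1])
            if frames is not None:
                img = frames[i - 1]
                y0, y1 = int(max(y - search, 0)), int(min(y + search + 1, img.shape[0]))
                x0, x1 = int(max(x - search, 0)), int(min(x + search + 1, img.shape[1]))
                window = img[y0:y1, x0:x1]
                dy, dx = np.unravel_index(np.argmax(window), window.shape)
                y, x = y0 + dy, x0 + dx
            points.append((y, x))
        return points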
In the example below, Track 2 is split into two separate tracks, one spanning frames 0 through
3 and the other starting from frame 4, by dragging the track point at frame 3 down to an open
row in the Track Editing Grid.
Separate a track by dragging a track point to an open row in the Track List
The old track will terminate at the selected track point and a new track begins in the frame after it.
Highlight the point you wish to remove (hold the “Shift” key to select multiple points)
2.7 Track and Lineage Editing >> 2.7.3 Editing Tracks on an Image
When you start the process of creating a new trajectory, the time lapse sequence will advance
by one frame automatically. A red arrow shows the direction of the cursor with respect to the
previous track point location. Left-click on a location on the image where you’d like to set the
track point and the sequence will advance by another frame. The drawn track will be shown
as a red line overlaid on the image.
Please note that you will not be able to modify previous track points or to skip frames when defining track locations while you are in track creation mode. You may navigate the time sequence using the time control below the image. Further track edits are possible after the completion of track creation, as explained later in this section.
When you are done with drawing the track, right-click anywhere on the image and select “End
Trajectory” from the context menu. A new track will be created.
Like in track creation, a red arrow originating from the last track point will indicate the direction
your cursor is pointing. Left-click on the image to set the location of the track point, the
sequence will then advance to the next frame.
When you are done with track extension, right-click anywhere on the image and select “End Trajectory” from the context menu to complete the operation. For track creation and extension, you may also connect two tracks by right-clicking on a track point and selecting “End Trajectory” from the context menu. You will be prompted to either end the track at the selected track point (connect tracks) or create a new track point and maintain the existing tracks.
Connecting and disconnecting lineages will be further explained in the next section, Editing
lineage.
2.7 Track and Lineage Editing >> 2.7.4 Editing Lineage
To connect a lineage, drag a point on the parent track to any point on a child track. Unlike Track Editing, drag-and-connect will not merge the two tracks; the pending connection is shown as a gray dotted line connecting the parent to the child. Drag a point on the parent track to any point on the second child track to complete the lineage connection.
You can also connect lineages on the image using the “Connect Lineage” option from the context menu. You must have three tracks selected.
You can also right-click on a track with lineage on the image and select “Disconnect
Lineage” from the context menu to disconnect lineage from the selected track.
2.7 Track and Lineage Editing >> 2.7.5 Update Track Measurements
To update track measurements without exiting the track editing mode, click on the arrowhead
icon below the track editing button in the toolbar to see additional track editor options. Select
the tracking procedure you wish to use for the update in the drop down menu. Click the “Update
Now” button to update the track measurements and stay in track editing mode.
Track measurements will be updated automatically when exiting the track editing mode.
2.8 Mask Editing
CL-Quant allows the user to create new mask objects or edit existing mask objects to optimize
segmentation. Click on the Mask Editor icon (below) to begin.
Click on the mask region to bring up the object bounding boxes. You can move the mask
region by holding down the left-mouse button while dragging the mask region; or resize the
region by clicking and dragging the corners and edges of the bounding box.
Drawing tools
User may specify a different mask drawing tool in the “Mask Edit Tool” tab. Open the
“LUTs”/“Charts” panel and click on the “Mask Edit Tool” tab to view the available drawing
modes.
Polygon
The user can specify straight-edged polygons using the polygon tool. To create a region, left-click anywhere on the image; each additional click specifies another point of the polygon.
Straight line
The user can specify a straight-line mask using the line tool. To create a region, left-click anywhere on the image, then click once more to specify the end point of the line.
For free form and polygon tools, press “Enter” or double-click on the image to complete a
region.
Click on an action to display the hand-drawn region on the image. Mask regions that are
added to the image are shown in green on the image and those that are removed from the
image are shown in red on the image. Regions that are drawn, but with no actions specified
are shown in light blue and are listed as “Drawn” in the “Action” column.
2.9 Action History
Action history displays a list of all operations done on an FOV. To view action history, go to
“Tools > View Action History”.
The “Action History” window shows the type of operations done on the FOV and contains
information about the completed operations. Information such as timestamp and description of
the operations is displayed. A list of operations visible in the action history window is shown
below.
You can filter the action history view by action module by making a selection in the action group pull-down menu.
Chapter 3 Teaching CL-Quant
Chapter Overview
This chapter covers functionality used to teach CL-Quant for new applications. If you are
interested in creating novel experimental analyses, this is the place to start.
Two main types of teaching are supported: you can update an existing recipe (provided by
Nikon), or create your own, new procedure list from scratch.
Recipe Update
Recipe update is done using the Recipe Console. The Recipe Console provides key parameters
for updates along with instructions and guidelines.
- Other Procedures
FOV import procedure: This procedure automates the import of image frames into multi-dimensional FOVs, and also multiple FOVs into FOV lists.
Enhancement procedure (teaching wizard provided): This procedure automates traditional image processing functions and common tasks (such as adding channels, deleting channels, bit depth conversions, etc.).
Innovative teaching wizards are provided to make it easy to create core procedures. These are
described in the section below. You can combine multiple procedures in procedure lists and
recipes to streamline image processing and analysis. The recipes can be edited using the Recipe
Console described in the next section. The creation and configuration of the other types of
procedures are described in the following sections.
NOTE:
Procedure creation requires that you have purchased the required functionality. Tracking is
included in the “Tracking” option, and Decision is included in the “Decision” option.
3.1 Recipe Console >> 3.1.1 Launch the Recipe Console
The Recipe Console allows the user to view procedures and to modify key parameters within a
recipe. The user can also disable or duplicate specific steps of a recipe to fit their image
processing needs.
The Recipe Console will prompt you to choose an image; if no image is loaded into the data
explorer, click on the “Load” button to load an image into the Recipe Console. The Recipe
Console provides a step-by-step preview so that you can quickly modify key parameters without evaluating their performance on whole image sequences.
3.1 Recipe Console >> 3.1.2 Recipe Console Interface
Control panel
The parameter display control panel at the top of the Recipe Console UI allows the user to expand or collapse all visible parameters in a recipe. There are two buttons, as shown below:
Expand all: Shows all visible parameters in the recipe steps.
Collapse all: Hides all visible parameters in the recipe steps.
Main panel
The main panel (Recipe console panel) allows the user to make changes to the recipe
step-by-step with additional descriptions to assist the user in optimizing the parameter settings.
Each step can be expanded or collapsed individually, or the user may expand or collapse all recipe steps by clicking on the “Expand all” or “Collapse all” button. One example of a recipe
step box is displayed below.
The recipe step headers have a different number of buttons depending on the configuration of the recipe step, providing easy access to the step description, the apply recipe step action, and recipe options.
Descriptions of the buttons are shown below.
Show descriptions
Shows description of the current step in a new window.
3.1 Recipe Console >> 3.1.3 Review or Update Key Parameters
In some recipe steps, an additional option to modify the teaching may be available by clicking on
the “Open” (magic wand) icon. Please refer to the Procedure update wizards section for more
information on modifying the procedure teachings.
When modifying a parameter, recipe changes are previewed on the image instantly. Click the
“Apply recipe step” button to save changes to the parameters and to apply the recipe step to
the entire image sequence.
To revert to recipe default values, click on the “Revert” button next to the parameter textbox.
Please note that certain functions (such as segmentation and partition updates) cannot be
reverted.
Alternatively, you can click the “Apply recipe step” button on the last step of the recipe to apply
the whole recipe to the image.
3.2 Procedure Creation Wizards
To launch the procedure creation wizard, click on the “Wizard” function under the “Tools” menu
and select the procedure type you wish to create (segmentation, measurement, partition,
enhancement, tracking, decision, recipe).
You can also right click on the segmentation, partition or measurement folders in the data
explorer and select “New”. This will launch the respective wizard.
3.2 Procedure Creation Wizards >> 3.2.1 Segmentation Teaching Wizard
The segmentation teaching wizard guides you through the steps of creating a segmentation procedure and provides instructions along the way.
The wizard is designed to create a single detection mask. To create more than one mask, simply run the wizard again on the image containing the first mask. This creates a second procedure that generates a second mask on the image.
Select FOVs
The first step in the wizard is to select the FOVs you want to use for teaching. Multiple FOVs
can be used. If the FOVs have temporal or Z extent, the first frame in T and the middle frame in
Z will be used.
NOTE:
You can exit the wizard and use the FOV crop tool to extract specific frames from the
multi-dimensional FOV, and then re-launch the wizard and use them for teaching.
The FOVs must have the same channel and mask configuration to be included in the same wizard session.
Soft matching is the teachable image pattern recognition technology in the software. It generates a transformed image, called a confidence image, in which patterns are enhanced or suppressed based on user teaching. A segmentation procedure encodes the user teaching and can be used to automatically transform input images in high-volume mode. The transformed confidence map can then be thresholded in the next step to generate the detection mask.
Therefore the goal of segmentation teaching using soft matching technology is to teach a
transformation where the structures of interest are cleanly separated (made either dark or
bright) from other structures.
In this example above, a segmentation procedure was taught to enhance the internal cell
patterns (such as nuclei), and suppress cell borders and bright structures. This confidence
image can be simply thresholded and the mask counted for an accurate measure of cell count
in the image. See the guidelines for successful soft matching in the next section.
(1) Using the region drawing tools, draw the region(s) for one of the patterns of interest.
Drawing Tools
Drawing tools include freehand line tool and freehand region tool.
To draw an ROI, simply depress the tool icon and then draw on the image frame. When
you draw the ROI it will be active, and the segments of the ROI will appear to move
around the region.
(2) Assign the active (moving) regions the pattern type (Enhance, Suppress, or
Background).
After drawing an ROI, the user should assign a pattern type to the active ROIs. The
pixels inside the ROIs become a teaching set for the confidence mapping operation.
Pattern Types
- Enhance:
Types of image patterns that you want to isolate from the rest of the image, mask and
quantify.
- Suppress/Background:
Types of patterns that you do not want to include in the mask.
To assign pattern types to ROIs, click on the appropriate icon as shown above; the green
check mark is for “Enhance” type, the red “X” is for “Suppress” type and the yellow
square on blue background is for “Background” type. When you click on the icon, all the
active ROIs will be assigned the type of the icon you click.
Preview on All
If you select this option the current procedure will be applied to all teaching images for
your review.
Above the threshold tool settings is the soft matching score. The soft matching score is the mean confidence value of the enhancement ROIs minus the mean confidence value of the suppression ROIs. A higher score indicates better separation between the enhancement and suppression classes.
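Conceptually, the score can be reproduced outside CL-Quant with a few lines of NumPy. This is only an illustrative sketch; the array names (confidence, enhance_mask, suppress_mask) are assumptions, not CL-Quant objects.

    import numpy as np

    def soft_matching_score(confidence, enhance_mask, suppress_mask):
        # confidence: 2-D float array (the confidence image)
        # enhance_mask, suppress_mask: boolean arrays marking the pixels
        # inside the enhancement and suppression ROIs drawn during teaching
        # Score = mean confidence inside enhancement ROIs minus mean inside suppression ROIs
        return confidence[enhance_mask].mean() - confidence[suppress_mask].mean()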
The soft matching score can serve as a guide to improve soft teaching. Creating teaching
regions with clearer distinction between enhancement ROIs (dotted green region) and
suppression ROIs (red dotted region) will yield a higher score. The figure below shows an
example of good teaching (top) and bad teaching (bottom) with their respective confidence
maps.
Good teaching (high soft matching score, top); bad teaching (low soft matching score, bottom).
Thresholding
Use the threshold tool (shown below) to create a segmentation mask. The tool highlights a
range of pixels in the image (original, processed or confidence) whose values fall between the
parameters you select. If the pixels are within the values, they will be shown having a red
overlay in the image panel. Thus, if you select all the possible values, then the entire image
panel will be covered in red.
The tool is used to select pixels for inclusion in a segmentation mask. This is the detection
mask which will be quantified later.
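As a rough illustration of what the range selection does (outside CL-Quant, in NumPy), pixels whose values fall between the lower and upper settings become the detection mask; the red overlay simply visualizes this boolean mask. Function and variable names below are illustrative.

    import numpy as np

    def range_threshold(image, lower, upper):
        # True where the pixel value lies within [lower, upper];
        # this boolean mask is what gets saved as the detection mask.
        return (image >= lower) & (image <= upper)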
1. Click the “Show” or “Hide” button to visualize the selected pixels in the range
Once you select the appropriate range, click “Save threshold to mask” and the selected
pixels (in red) will be saved to your segmentation mask.
The threshold tool has many advanced options which are described below. To access the
advanced options, click on the small button circled in red in the figure above. This powerful tool
has 2 types of thresholds, 4 offset choices and 2 types of units. The default mode is a direct
type, bi-modal offset, and levels units. The default should work fine in most cases.
Threshold options
Threshold type
“Direct” and “Adaptive” types of thresholding are supported. The adaptive type combines
thresholding with image processing steps into a simple one step operation.
Direct: With the direct type threshold, the values you select represent the real pixel values in the image (i.e. no offset). Thus if you select 0 and 100, pixels in the image that have values from 0 to 100 will be covered by the red overlay. This is the standard type of threshold in most image analysis software.
NOTE: If you have performed Soft Matching, the overlay is calculated using the pixels in the hidden confidence image and displayed on the original, visible image.
Adaptive: Often objects of interest cannot be isolated with a single direct threshold. This could be due to intensity gradients from uneven illumination, or to global intensity undulations in the image due to accumulation of fluorescence in biological structures. For example, it is desirable to be able to threshold puncta in a bright region of the image as well as in a darker region of the image using a single threshold.
Dynamic Offset
Dynamic offsets enable the threshold values to adjust dynamically as a function of the image
content. The tool supports three types of Bi-modal offsets.
Bi-modal: The tool's default offset. Usually the pixel values in an image have a bi-modal frequency distribution corresponding to foreground (structures of interest) and background pixels. The software automatically determines the bi-modal point in the image's pixel intensity distribution and sets that point to zero, thus making the threshold settings dynamic when applied to a set of images (each image's threshold range will vary slightly). The threshold range is then set with respect to this bi-modal zero point, from -255 to 255.
Threshold Units
The tool supports levels and histogram percentile scales.
Levels: This is the standard type of threshold and the default unit. Here the unit is image gray level. So for an 8-bit image, the threshold values range from 0 to 255 (range type) or -255 to 255 (bi-modal type).
Histogram Percentile: The units in this case are percentiles. The percentile is calculated with respect to the pixel intensity distribution of the image's pixels. Thus the threshold values range from 0 to 100 (range type) or -100 to 100 (bi-modal type). For example, if you increase the threshold value by one unit, you bring into the overlay the pixels that correspond to an additional 1% of the pixel intensity distribution.
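A hedged NumPy sketch of the percentile unit: the percentile settings are converted to gray levels through the image's own intensity distribution before the range test is applied. The helper below is illustrative only; the exact conversion CL-Quant performs internally is not documented here.

    import numpy as np

    def percentile_threshold(image, lower_pct, upper_pct):
        # Map percentile settings (0-100) to gray levels via the image's
        # pixel intensity distribution, then apply the usual range test.
        lo = np.percentile(image, lower_pct)
        hi = np.percentile(image, upper_pct)
        return (image >= lo) & (image <= hi)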
3.2 Procedure Creation Wizards >> 3.2.2 Segmentation Review and Update Wizard
To update or review an existing procedure, load the procedure and right click on it in the data
explorer and select “Review with Wizard” or “Update with Wizard” as shown below.
On the next page, you will be able to review the soft matching results, as well as additional
options for updates and reviews. To review teaching data, select “Review teaching data” and
click “Next”.
Steps
1. Review default threshold overlay
At this step, a detection mask with the threshold settings from the procedure will be
displayed. To toggle the detection mask click on the “View overlay” icon.
2. Review confidence image
You can also review the confidence image by clicking the “Show Confidence Image” icon
shown below.
Click to restore the original settings.
3.2 Procedure Creation Wizards >> 3.2.3 Partition Teaching Wizard
1. Select FOV:
More than one FOV can be selected for teaching.
Select FOVs
The first step in the wizard is to select the FOVs you want to use for teaching. Multiple FOVs
can be used. If the FOVs have temporal or Z extent, the first frame in T and the middle frame in
Z will be used. The FOV must have a mask before it can be used for partition teaching.
NOTE:
You can exit the wizard and use the FOV crop tool to extract specific frames from the
multi-dimensional FOV, and then re-launch the wizard and use them for teaching.
Select mask
If the FOVs have multiple masks you will be prompted to select one mask for teaching. The
wizard can only be used to teach partitioning for one mask at a time (multiple procedures can
be used to partition multiple masks).
The Soft Fitting section provides three types of region selection tools and buttons:
Pointer
The pointer tool allows the user to select existing regions on the image.
Smoothing kernel size - Specify the smoothing kernel size for smoothing the Soft fitting
confidence image.
Select the region tool you wish to use and draw directly on the image. Soft fitting uses two
region types - Cut and Keep - as guides for separating confluent cell regions. The Cut
regions are shown with a red dotted border while the Keep regions are shown with a green
dotted border.
Example of Soft Fitting teaching with Cut (red) and Keep (green) regions specified
For best results, define a Keep region inside each Cut region to ensure optimal pattern
recognition of the partition regions. When you are finished, click the “Teach” button to view
the confidence image.
As shown in the confidence image above, there is good intensity separation between the cell boundaries and cell bodies. At this point, you can define additional cell regions and click “Teach” again to generate a new confidence image.
Next, seed masks are generated that will serve as the guide for partitioning the confluent cell
region masks.
The seed masks are generated in the confidence channel image and can be viewed by
toggling their respective masks tab above the image.
You can modify the sensitivity of the seed masks by adjusting the threshold slider in the
Partition section. A higher threshold will result in more refined seeds but may also remove
certain seeds. For best results, use the lowest possible threshold value that will provide you
with good small seeds for all cells.
The Create Annotation tool on the toolbar can assist you with specifying the size range.
For best results, measure the region size of the smallest and largest cell in the image.
To view the partition result, click “Apply”. The partition output will be generated in the
“Output” mask in the original image channel. If you are not satisfied with the output, you can
re-teach soft fitting by editing the teaching regions by drawing. If you are satisfied with the
result, click “Save” to save the teaching and “Finish” to quit out of the Partition update wizard.
User can specify the input channel and mask, as well as the output channels and mask in
the “Input & Output” tab of the Partition Wizard.
Note that existing data in the specified output channel or mask will be overwritten by the
Partition Wizard output.
3.2 Procedure Creation Wizards >> 3.2.4 Partition Procedure Update Wizard
To review an existing procedure, load the procedure and right click on it in the data explorer and
select “Review with Wizard”.
3.2 Procedure Creation Wizards >> 3.2.5 Measurement Configuration Wizard
- Object measurements:
Calculated for discrete mask components individually. An example of an object
measurement would be the mean intensity and area for each individual cell in an image.
- Field measurements:
Calculated for the entire mask. An example of a field measurement would be the wound size
in a scratch wound assay.
When you apply an object measurement procedure, ROIs will appear overlain on the image
identifying individual objects. See Section 2.6.1, “(1) Objects and Subsets”.
Morphology measurement
Area to image size ratio: The ratio of the mask area to the area of the whole image field of view (FOV).
Centroid X: The x coordinate of the mask centroid. The coordinate origin is the top left corner of the image frame. The centroid is essentially the average x location of the mask (the sum of x coordinates divided by the total number of pixels in the mask). The centroid is calculated only for binary masks (the image intensity is not used).
Centroid Y: The y coordinate of the mask centroid. The coordinate origin is the top left corner of the image frame. The centroid is essentially the average y location of the mask (the sum of y coordinates divided by the total number of pixels in the mask). The centroid is calculated only for binary masks (the image intensity is not used).
Length: The length of the mask is defined as the number of pixels along the major axis of the best-fitting ellipse to the mask.
Perimeter: The boundary of the mask. It is the pixel count of the boundary pixels just inside the mask. Internal holes in the mask are counted.
ROI Width: The width of the bounding box encompassing the object.
ROI Height: The height of the bounding box encompassing the object.
Width: The width of the mask is defined as the number of pixels along the minor axis of the best-fitting ellipse to the mask.
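For reference, the area-ratio and centroid definitions above can be reproduced from a binary mask with NumPy. This is a conceptual sketch outside CL-Quant; length and width via a best-fitting ellipse would additionally require an ellipse fit (for example from image moments), which is omitted here.

    import numpy as np

    def basic_morphology_measurements(mask):
        # mask: 2-D boolean array (one object or a whole detection mask)
        ys, xs = np.nonzero(mask)            # coordinates of mask pixels
        n = xs.size                          # number of pixels in the mask
        area_to_image_ratio = n / mask.size  # mask area / whole FOV area
        centroid_x = xs.mean()               # average x location (origin: top-left corner)
        centroid_y = ys.mean()               # average y location
        return area_to_image_ratio, centroid_x, centroid_y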
Intensity measurement
Coefficient of variation in intensity: The coefficient of variation is defined as the ratio of the standard deviation of intensity to the mean of intensity.
High intensity: The 95% intensity value measured in the mask, measured on the grayscale image. The 95% value is the value below which 95% of the pixel intensity values fall. The pixels belong to the mask and are measured on the grayscale image.
Low intensity: The 5% intensity value measured in the mask, measured on the grayscale image. The 5% value is the value below which 5% of the pixel intensity values fall. The pixels belong to the mask and are measured on the grayscale image.
Max intensity: The maximum of the intensity values measured in the mask. The maximum intensity value is the highest value measured from the pixels in the mask on the grayscale image.
Mean intensity: The mean of the intensity values measured in the mask, measured on the grayscale image. The mean is defined as the sum of the intensity values in the mask divided by the number of pixels in the mask.
Median intensity: The median is the number separating the higher half of the sample from the lower half. Here the sample is the pixel intensity values in the mask measured on the grayscale image.
Min intensity: The minimum of the intensity values measured in the mask. The minimum intensity value is the lowest value measured from the pixels in the mask on the grayscale image.
Standard deviation of intensity: The standard deviation is a measure of the spread of the intensity values measured in the mask on the grayscale image. The formula for the standard deviation of a random variable is used.
Total intensity: The sum of the intensity values measured in the mask using the grayscale image.
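The intensity measurements listed above map directly onto standard statistics over the masked pixels. A minimal NumPy sketch for orientation (array names are illustrative, not CL-Quant objects):

    import numpy as np

    def intensity_measurements(gray, mask):
        # gray: grayscale image; mask: boolean array of the same shape
        vals = gray[mask].astype(float)          # pixel intensities inside the mask
        return {
            "mean": vals.mean(),
            "median": np.median(vals),
            "min": vals.min(),
            "max": vals.max(),
            "low (5%)": np.percentile(vals, 5),
            "high (95%)": np.percentile(vals, 95),
            "std": vals.std(),
            "cv": vals.std() / vals.mean(),      # coefficient of variation
            "total": vals.sum(),
        }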
Measurement renaming
The default naming convention is [Link] measurementName. You can
define your own measurement names (for both regular and derived measures) by right clicking
on the measurement and typing a new name. When you apply the procedure, the name will
appear just as you’ve written it.
Advanced measurements
You can derive measurements using existing measures you created in the selection step.
Common types of derived measures include ratios and differences. When you click on the
“Add” button, a new tab will appear in the wizard containing the derived measurement.
Use an equation
Enable this option to define a customized advanced measurement equation using “first” and “second” for Measurements 1 and 2 respectively. You can create temporal measurements by modifying the time value in the square bracket following the measurement notations. You can click on the “Add Template” button to create an equation template for multiple measurements.
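For illustration only (the exact notation accepted by your version may differ): a simple ratio of the two selected measurements at the current frame could be written as first[0] / second[0], with the value in square brackets serving as the time offset described above.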
3.2 Procedure Creation Wizards >> 3.2.6 Measurement Procedure Update Wizard
To update or review an existing procedure, load the procedure and right click on it in the data
explorer and select “Review with Wizard” or “Update with Wizard”.
3. Review measurements
Here you can review the procedure’s measurements. If the procedure is updatable you may
modify the measurements.
5. Enable update
Lastly you can confirm the update status of the procedure. If update is enabled, you can
change the procedure to be non updatable.
3.2 Procedure Creation Wizards >> 3.2.7 Enhancement Procedure Wizard
NOTE:
You can exit the wizard and use the FOV crop tool to extract specific frames from the
multi-dimensional FOV, and then re-launch the wizard and use them for teaching.
- Image Enhancement:
A selection of powerful image enhancement functions (such as background normalization
and flatten background) which are commonly used in standard and template recipes.
- Mask Refinement:
A selection of powerful mask manipulation functions (i.e. partition and gating of single
components) which are commonly used in standard and template recipes.
- Advanced:
The complete set of image processing functions.
Some functions provide several key adjustable parameters. Use the slider bar or the textbox
to change the parameter values. Press “Execute” (green box, above) to apply the function to
the image. Note that the executed function is saved automatically in the operations history.
Click “Menu” to return to the function selection screen to add new enhancement functions.
In the example above, the “green” image at the current time (in Input1) is added to the
“green” image of the next frame (in Input2) and the resultant image is output to “Channel 3”.
- Delete an operation:
To delete an operation, right-click on the item in the operations history and select “Delete”
from the context menu. Note that changes to the image cannot be undone by deleting the
operation.
- Edit an operation:
To edit an enhancement function, double-click on the operation or right-click on the operation
and select “Edit” from the context menu. You will be taken to the edit mode where you can
modify the input, output, and key parameters. Press “Update” to save changes to the
function. Press “Menu” to return to the function selection screen.
NOTE:
You can edit another function while in the edit mode by clicking on a different operation.
- Apply an operation:
To apply a specific operation, select the operations from the list. Right-click on the selected
operations and choose “Apply” from the context menu. This function will apply the selected
operations.
Click “Finish” to complete the Enhancement Wizard and return to the main screen.
3.2 Procedure Creation Wizards >> 3.2.8 Enhancement Function Definitions
Enhance Texture
Adds texture to the image to better improve foreground-to-background separation.
Flatten BG
Normalizes the background intensity of an image to a desired value to correct tilted image
or uneven illumination.
Figure 1 Figure 2
Normalize Background
Normalizes the image intensity of an input image to a desired range. This function is
particularly useful for image sets with very different brightness.
NormalizeBackground (image input, int32 desiredRange, int32 ReferencePercentile,
int32 ImageHeight, image output)
Remove Background
Performs an open residue operation on the original image to remove the background of the
image.
RemoveBackground (image input, int32 kernelSize, image output)
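The open residue described for Remove Background corresponds to subtracting a morphologically opened image (an estimate of the slowly varying background) from the original. A conceptual SciPy sketch, not the CL-Quant implementation; the kernel size is an arbitrary illustrative choice:

    from scipy import ndimage

    def remove_background(image, kernel_size=51):
        # Grayscale opening suppresses structures smaller than the kernel,
        # leaving an estimate of the background; the residue keeps the objects.
        # The residue is non-negative because an opening never exceeds the original.
        background = ndimage.grey_opening(image, size=(kernel_size, kernel_size))
        return image - background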
Remove Texture DS
Performs iterative opening followed by closing to remove texture in the image. This
function also downsamples the image.
RemoveTextureDS (image input, int32 filterSize, image output, single percentDS, image
output)
Dilate
Takes the maximum value of all pixels in the kernel neighborhood of a pixel in the input
image and propagates the pixel maximum value to all the pixels in the kernel
neighborhood. The height and width of the kernel is the same.
Dilate (image input, int32 kernelSize, image output)
Erode
Takes the minimum value of all pixels in the kernel neighborhood of a pixel in the input
image and propagates the pixel minimum value to all the pixels in the kernel neighborhood.
The height and width of the kernel is the same.
Erode (image input, int32 kernelSize, image output)
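Outside CL-Quant, the same square-kernel dilation and erosion are available in SciPy; a brief illustrative sketch:

    from scipy import ndimage

    def dilate(image, kernel_size):
        # Each output pixel takes the maximum over a kernel_size x kernel_size neighborhood.
        return ndimage.grey_dilation(image, size=(kernel_size, kernel_size))

    def erode(image, kernel_size):
        # Each output pixel takes the minimum over a kernel_size x kernel_size neighborhood.
        return ndimage.grey_erosion(image, size=(kernel_size, kernel_size))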
Fill Holes
Fills in small, enclosed gaps in the input mask and stores result to output.
FillHoles (image input, int32 sizeValue, image output)
Guided Partition
Segments a contiguous binary input mask into discrete objects. Partition follows an input
reference seed mask. Partitioned mask is stored to output.
GuidedPartition (image input, image seed, bool fourConnected, image output)
Peak-Based Partition
Performs a distance transform operation on the input mask followed by iterative guided
partition to separate touching object masks.
PeakBasedPartition (image input, image output, int32 StartingLevel, int32 EndingLevel,
int32 largestSeed, int32 smallestSeed, int32 step)
Remove Objects
Removes input mask objects that fall below a specified cutoff size.
RemoveObjects (image input, int32 sizeValue, image output)
Advanced
Power users wishing to perform additional image enhancement operations may select
enhancement functions from the “Advanced” section. All enhancement functions in the
“Image Enhancement” and “Mask Refinement” sections can be found here along with
additional arithmetic, logic, morphological, and more post-processing functions.
(1) 3D operations
3D operations are used to process Z-stack images and generate intensity projection output in
2D.
HSEDF Projection
Takes an input Z-stack and generates a 2D extended depth of focus (EDF) intensity
projection image. The EDF projection combines the in-focus pixels of each Z-stack image
into one image, ensuring that the projection output is all in focus.
Descriptions of operations and their sub-operations are given below, along with their input
parameters.
Add
Combines the intensity of two inputs to generate an output image. Two sub-operations are
available:
Add (constant) (image input, int32 c, image output, arithmeticScheme)
This function takes an image and an integer as inputs and adds the integer to the pixel intensity value of the input image.
Add (image) (image input1, image input2, image output, arithmeticScheme)
This function takes two images as inputs, combines their pixel intensity values, and stores the result to the output image.
Divide
Divides the intensity of the first input by the second input to generate an output image.
Three sub-operations are available:
Divide (constant) (image input, int32 c, image output)
This function divides the pixel intensity value of the input image by the integer constant c and stores the result to the output image.
Divide (decimal constant) (image input, single c, image output)
This function divides the pixel intensity value of the input image by a single-precision float constant c and stores the result to the output image.
Divide (image) (image input1, image input2, image output)
This function divides the pixel intensity value of input image 1 by input image 2 and stores the result to the output image.
Rotate
Shifts the bits of each pixel in the input image by the number of bits specified by shiftCount.
Two functions are available:
LeftRotate (image input, int32 shiftCount, image output)
This function performs a left shift with the bits wrapped around from the Most Significant Bit (MSB) to the Least Significant Bit (LSB) and stores the result to the output image.
RightRotate (image input, int32 shiftCount, image output)
This function performs a right shift with the bits wrapped around from the LSB to the MSB and stores the result to the output image.
Shift
Moves the bits of each pixel in the input by the number of bits specified by shiftCount. Two
functions are available:
LeftShift (image input, int32 shiftCount, image output)
This function performs a left shift on the bits with the MSB discarded and stores the result to the output image.
RightShift (image input, int32 shiftCount, image output)
This function performs a right shift on the bits with the LSB discarded and stores the result to the output image.
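For an 8-bit image, the rotate and shift operations behave like the usual per-pixel bitwise operations. A NumPy illustration, assuming 8-bit pixel depth (not the CL-Quant implementation itself):

    import numpy as np

    def left_rotate8(image, shift_count):
        # Bits pushed out of the MSB re-enter at the LSB (8-bit wrap-around).
        x = image.astype(np.uint16)              # wider type to avoid overflow
        s = shift_count % 8
        return (((x << s) | (x >> (8 - s))) & 0xFF).astype(np.uint8)

    def left_shift8(image, shift_count):
        # Bits pushed out of the MSB are discarded.
        x = image.astype(np.uint16)
        return ((x << shift_count) & 0xFF).astype(np.uint8)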
Multiply
Multiplies the intensity of the first input by the second input to generate an output image.
Three sub-operations are available:
Multiply (decimal constant) (image input, single c, image output, arithmeticScheme)
This function multiplies the input image by a single-precision float constant c and stores the result to the output image.
Subtract
Subtracts intensity of the second input from the first input to generate an output image.
Two sub-operations are available:
Subtract (image) (image input1, image input2, image output, arithmeticScheme)
This function subtracts input image 2 from input image 1 and stores the result to the output image.
Descriptions of operations and their sub-operations are given below, along with their input
parameters.
Pixel And
Performs boolean “and” operation between the two inputs and stores result to output
image. Two sub-operations are available:
Pixel Not
Performs a boolean “not” operation between each pixel in the input image. This function
inverts the image input and stores result to output image. Function input parameters are
described below.
Pixel Or
Performs a boolean “or” operation between the two inputs and stores result to output
image. Two sub-operations are available:
Pixel Xor
Performs a boolean “exclusive or” (“xor”) operation between two inputs and stores result to
output image. Two sub-operations are available:
Descriptions of operations and their sub-operations are given below, along with their input
parameters.
Between
Checks each pixel in the input and returns true (255) if the pixel intensity is between the lower and upper values, and false (0) if it is outside those limits. The input parameters of the Between function are shown below.
Between (image input, int32 lower, int32 upper, image output)
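The Between comparison is equivalent to a per-pixel range test that yields 255 or 0. A NumPy equivalent for illustration only:

    import numpy as np

    def between(image, lower, upper):
        # 255 where lower <= pixel <= upper, else 0
        inside = (image >= lower) & (image <= upper)
        return np.where(inside, 255, 0).astype(np.uint8)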
Equals
Checks each pixel in the first input and returns true if the pixel intensity is equal to the value of the second input. If equal, the function returns true (255) for that particular pixel and false (0) if not equal. Two sub-operations are available:
Equals (image) (image input1, image input2, image output)
This function checks the pixel intensity of each equivalent pixel in input images 1 and 2 and returns true if the equivalent pixel intensities are equal.
Equals (constant) (image input, int32 c, image output)
This function checks the pixel intensity of each pixel in the input image against an integer constant c and returns true if the pixel intensity equals the value of c.
Greater Than
Checks each pixel in the first input and returns true if the pixel intensity is greater than the value of the second input. If greater, the function returns true for that particular pixel and false if the value is less than or equal to that of the second input. Two sub-operations are available:
GreaterThan (image) (image input1, image input2, image output)
This function compares the pixel intensity of each pixel in input image 1 against the intensity of the equivalent pixel in input image 2 and returns true if the pixel intensity of input 1 is greater than that of input 2.
GreaterThan (constant) (image input, int32 c, image output)
This function compares the pixel intensity of each pixel in the input image against an integer constant c and returns true if the pixel intensity is greater than c.
GreaterThanOrEquals (image) (image input1, image input2, image output)
This function compares the pixel intensity of each pixel in input image 1 against the intensity of the equivalent pixel in input image 2 and returns true if the pixel intensity of input 1 is greater than or equal to that of input 2.
GreaterThanOrEquals (constant) (image input, int32 c, image output)
This function compares the pixel intensity of each pixel in the input image against an integer constant c and returns true if the pixel intensity is greater than or equal to c.
Less Than
Checks each pixel in the first input and returns true if the pixel intensity is less than the value of the second input. If less, the function returns true for that particular pixel and false if the value is greater than or equal to that of the second input. Two sub-operations are available:
LessThan (image) (image input1, image input2, image output)
This function compares the pixel intensity of each pixel in input image 1 against the intensity of the equivalent pixel in input image 2 and returns true if the pixel intensity of input 1 is less than that of input 2.
LessThan (constant) (image input, int32 c, image output)
This function compares the pixel intensity of each pixel in the input image against an integer constant c and returns true if the pixel intensity is less than c.
LessThanOrEquals (image) (image input1, image input2, image output)
This function compares the pixel intensity of each pixel in input image 1 against the intensity of the equivalent pixel in input image 2 and returns true if the pixel intensity of input 1 is less than or equal to that of input 2.
LessThanOrEquals (constant) (image input, int32 c, image output)
This function compares the pixel intensity of each pixel in the input image against an integer constant c and returns true if the pixel intensity is less than or equal to c.
Lower Threshold
Checks each pixel in the input image and sets pixel intensity to an integer constant c if the
intensity is below the constant c. Pixel intensity higher than c will not be affected by this
function.
LowerThreshold (image input, int32 c, image output)
Maximum
Compares the pixel intensity of the two inputs and stores the higher intensity value of the
two to output. Two sub-operations are available:
Maximum (constant) (image input, int32 c, image output)
This function compares the pixel intensity of the input image and the integer constant c and stores the higher value of the two to the output.
Minimum
Compares the pixel intensity of the two inputs and stores the lower intensity value of the
two to output. Two sub-operations are available:
Minimum (constant) (image input, int32 c, image output)
This function compares the pixel intensity of the input image and the integer constant c and stores the lower value of the two to the output.
Not Equals
Checks each pixel in the first input and returns true if the pixel intensity is not equal to the value of the second input. If not equal, the function returns true (255) for that particular pixel and false (0) if equal. Two sub-operations are available:
NotEquals (image) (image input1, image input2, image output)
This function checks the pixel intensity of each equivalent pixel in input images 1 and 2 and returns true if the equivalent pixel intensities are not equal.
NotEquals (constant) (image input, int32 c, image output)
This function checks the pixel intensity of each pixel in the input image against an integer constant c and returns true if the pixel intensity does not equal the value of c.
Upper Threshold
Checks each pixel in the input image and sets pixel intensity to an integer constant c if the
intensity is greater than the constant c. Pixel intensity lower than c will not be affected by
this function.
UpperThreshold (image input, int32 c, image output)
Overview
Distance transform functions output a topographical map of an input mask, and encode the
pixel distance to the mask boundary.
Given an image I, the distance transform assigns each foreground pixel (inside the mask)
the shortest distance between the pixel and a background pixel (outside the mask).
The distance transform can be computed sequentially by a two-pass procedure. The first
(forward) pass scans in a left-right top-bottom raster scan order. The second (backward)
pass scans in a reverse right-left bottom-top order. In the forward pass for a foreground
distance transform, the output U(x,y) at pixel position (x,y) is determined by the value of input
image I and the values at previously computed positions of U by
$$U(x, y) = \min\Big\{\ \min_{(i,j)\,\in\,N_F(x,y)} \big( U(i, j) + l(i, j) \big),\ \ I(x, y) \Big\}$$
where l(i, j) is the local distance between pixel (x, y) and its neighbor (i, j) (for example, 1 for the 4-connected city block distance).
NF(x,y) is the forward neighbors of the pixel (x,y). The forward neighbor contains a selected
set of adjacent neighbors that are already scanned in the backward pass. For 4-connected
city block distance, NF(x,y) is the lower half of the 4-connected neighbor. That is, NF(x,y) =
{(x+1,y), (x,y+1)}. For 8-connected chessboard distance and Euclidean distance, NF(x,y) is
the lower half of the 8-connected neighbor. That is, NF(x,y) = {(x+1,y), (x+1,y+1), (x,y+1),
(x-1,y+1)}.
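For orientation, here is a minimal Python sketch of a two-pass city block distance transform of the kind described above. It follows the common convention in which each pass uses the neighbors already visited in that pass; function and variable names are illustrative, and this is not the CL-Quant implementation.

    import numpy as np

    def cityblock_distance_transform(mask):
        # mask: 2-D boolean array, True for foreground (inside the mask)
        h, w = mask.shape
        inf = h + w                              # larger than any possible city block distance
        d = np.where(mask, inf, 0).astype(np.int32)

        # Forward pass: left-to-right, top-to-bottom
        for y in range(h):
            for x in range(w):
                if d[y, x] > 0:
                    if y > 0:
                        d[y, x] = min(d[y, x], d[y - 1, x] + 1)
                    if x > 0:
                        d[y, x] = min(d[y, x], d[y, x - 1] + 1)

        # Backward pass: right-to-left, bottom-to-top
        for y in range(h - 1, -1, -1):
            for x in range(w - 1, -1, -1):
                if y < h - 1:
                    d[y, x] = min(d[y, x], d[y + 1, x] + 1)
                if x < w - 1:
                    d[y, x] = min(d[y, x], d[y, x + 1] + 1)
        return d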
The images in Figure 1 illustrate the three distance operators applied to a Euclidean disc. Figure 1(a) shows the Euclidean disc; Figure 1(b) shows the Euclidean distance transformed image of (a); Figure 1(c) shows the 4-connected (city block) distance transformed image of (a); Figure 1(d) shows the 8-connected (chessboard) distance transformed image of (a).
The images in Figure 2 illustrate the bounded and unbounded effects. Figure 2(a) shows a Euclidean disc hole; Figure 2(b) shows the Euclidean distance transformed image of (a) when the distance is bounded by the image boundary; Figure 2(c) shows the Euclidean distance transformed image of (a) when the distance is unbounded by the image boundary.
Usage
In CL-Quant, there are three distance transform operations - 4-connected, 8-connected, and Euclidean - that define how the pixel distances are measured. The list of distance transform operations is shown below.
NOTE:
All distance transform functions now have the option of 8-bit or 16-bit output depth in the outputDepth dropdown menu. It is highly recommended that 8-bit depth be used when the object radius is less than 255. For very large objects with a radius exceeding 255 px, using 8-bit output will truncate the object intensity near the center.
Application
We will describe two typical applications of the distance transform: (A) Medial Axis; (B) Seeds for partition.
Figure 3
Each point on the medial axis has an associated value that is proportional to the time it
took for the fire to reach the given point from the time the grass fire was set. The medial
axis with its medial axis distance function is called the medial axis transform. It is an
information preserving representation of shape. To see this, just consider running the
grass fire backward. With time running in reverse, set a grass fire on each point of the
medial axis exactly at the time the original grass fire is extinguished at that point. The
boundary of the fire at time t=0 would be the boundary of the original given shape.
The medial axis can be produced by first calculating the distance transform of the image.
The medial axis then lies along the singularities (i.e. creases or curvature discontinuities)
in the distance transform. The medial axis is often described as being the `locus of local
maxima’ on the distance transform. If the distance transform is displayed as a 3-D
surface plot with the third dimension representing the grayscale value, the medial axis
can be imagined as the ridges on the 3-D surface. Figure 4 shows a rectangular region
(a) and its Euclidean distance transformed image (b). Figure 4 (c) shows the overlay (in
red) of the peak detection on the distance image. The peak is detected by morphological
opening residue as follows:
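A minimal SciPy sketch of this kind of peak (ridge) detection by morphological opening residue, assuming dist is a distance-transformed image (e.g. from ndimage.distance_transform_edt); the kernel size and residue cutoff are arbitrary illustrative choices, not the values used by CL-Quant:

    from scipy import ndimage

    def opening_residue_peaks(dist, kernel_size=5, min_residue=1):
        opened = ndimage.grey_opening(dist, size=(kernel_size, kernel_size))
        residue = dist - opened            # ridges survive; flat regions drop to ~0
        return residue >= min_residue      # binary peak / medial-axis mask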
Descriptions of operations and their sub-operations are given below, along with their input
parameters.
Average Filter
Averages the pixel intensity of each pixel in the input image with its neighboring pixels
within the filter window defined by its kernel size. Two sub-operations are available:
AverageFilter1 (image input, int32 kernelSize, image output)
This function averages the pixel intensity of each pixel in the input image with its neighboring pixels within the filter window. Both dimensions of the filter window are the same size and are defined by one integer constant.
AverageFilter2 (image input, int32 kernelWidth, int32 kernelHeight, image output)
This function averages the pixel intensity of each pixel in the input image with its neighboring pixels within the filter window. The dimensions of the filter window are determined by two integer constants.
Binomial Filter
Performs a binomial weighted average of the pixel intensity of each pixel in the input image
with its neighboring pixels within the filter window defined by its kernel size. Two
sub-operations are available:
Descriptions of operations and their sub-operations are given below, along with their input
parameters.
Enhance Texture
Adds texture to the image to better improve foreground-to-background separation.
EnhanceTexture (image input, bool useDiscKernel, int32 discKernelSize, int32
xKernelSize, int32 yKernelSize, int32 enhanceFactor, single downsamplePct, image
output)
Flatten BG
Normalizes the background intensity of an image to a desired value to correct tilted image
or uneven illumination.
Figure 1 Figure 2
Three Flatten BG functions are available to correspond to the kernel size used:
FlattenBG, Large (image input, int32 bgValue, image output)
This function flattens the image background to the desired bgValue using a kernel size of 255.
FlattenBG, Medium (image input, int32 bgValue, image output)
This function flattens the image background to the desired bgValue using a kernel size of 65.
FlattenBG, Small (image input, int32 bgValue, image output)
This function flattens the image background to the desired bgValue using a kernel size of 7.
Normalize Background
Normalizes the image intensity of an input image to a desired range. This function is
particularly useful for image sets with very different brightness.
NormalizeBackground (image input, int32 desiredRange, int32 ReferencePercentile,
int32 ImageHeight, image output)
Remove Background
Performs an open residue operation on the original image to remove the background of the
image.
RemoveBackground (image input, int32 kernelSize, image output)
Remove Texture DS
Performs iterative opening followed by closing to remove texture in the image. This
function also downsamples the image.
RemoveTextureDS (image input, int32 filterSize, image output, single percentDS, image
output)
Descriptions of operations and their sub-operations are given below, along with their input
parameters.
ConvertImageDepths1 (image input, mappingType, image output)
This function converts the pixel depth of an image using one of five mapping options.
ConvertImageDepths2 (image input, int32 sourceMin, int32 destMin, int32 sourceMax, int32 destMax, image output)
This function converts the pixel depth of an image and maps the source minimum and maximum pixel values to the specified destination min and max values.
Convert To 8-bit
Converts an input image with 16-bit pixel depth to an 8-bit image. Source upper and lower
intensity percentile in the 16-bit domain is mapped to their corresponding 8-bit values.
ConvertTo8-bit (image input, single lowerPercentile, single upperPercentile, single
lowerScale, single upperScale)
Convert To 16-bit
Converts an input image with 8-bit pixel to a 16-bit image, allowing pixel intensity values up
to 65535.
ConvertTo16-bit (image input, image output)
Copy Image
Creates a duplicate copy of the input image and stores to output.
CopyImage (image input, image output)
Downsample
Takes an input image and downsamples it to a new, reduced size in the output image. Four
downsample sub-operations are available:
DownsampleBySize (image input, int32 newWidth, int32 newHeight, bool saveOriginal, bool applyToAll)
This function downsamples the input image to a new width and height, measured in pixels, as shown in Figure 1.
DownsampleByPercent (using average filter) (image input, single percentFactor, image output)
This function downsamples the input image to a new width and height, measured in percentage with respect to its original resolution.
DownsampleByPercent (image input, single percentWidth, single percentHeight, image output)
This function downsamples the input image to a new width and height, measured in percentage with respect to its original width and height, as shown in Figure 2.
DownsampleByPercent (in-place) (image input, single percentWidth, single percentHeight, bool saveOriginal, bool applyToAll)
This function downsamples the input image to a new width and height, measured in percentage with respect to its original width and height, and stores the output to the input image.
Fill Image
Creates a uniform output image based on a specified statistic of pixel intensity of the input
image.
FillImage (image input, statistic, bool ignoreZero, image output)
Resample
Takes an input image and resamples it to a new size in the output image. This function can
be used both for up- and downsampling the input image depending on input parameters.
Two sub-operations are available:
Resample1 (image input, image output)
This function resamples an input image or mask and saves the result to an output mask.
Resample2 (image input, int32 newWidth, int32 newHeight, image output)
This function resamples the input image to a different dimension and stores the result to output.
Restore
Takes an image that has been downsampled and restores the original FOV. FOV masks,
object ROIs and measurements will be upsampled. The active FOV will be selected as its
input as shown in Figure 3.
Restore (image input, bool applyToAll)
NOTE:
The restore function recipe must be created at the same time as the downsample recipe,
otherwise there will be no image available for restoration. These can be saved to one or
two recipes.
Shift Image
Translates the image from original to new position specified by x- and y-shifts and stores
result to output. Image is not wrapped around in translation and blank pixels from the
movement will be set to specified uniform intensity value.
ShiftImage (image input, image output, int32 xShift, int32 yShift, int32 padValue)
Transpose
Transposes the input image and stores result to output. When the function is applied, rows
of the input image will become columns of the output and the columns from input will
become the rows of the output.
Transpose (image input, image output)
Group Objects
Assigns labeled values to each object on the input reference mask based on the zone of
influence.
GroupObjects (image referenceMask, image objectMask, bool useZoi, image output)
Labeling (4-Connected)
Assigns unique labeled values to each binary object in the input image and stores the assigned labels to output. Pixels are considered to be connected and part of the same object if they are adjacent to a non-zero pixel in any of the four cardinal directions. Three sub-operations are available:
Labeling (8-Connected)
Assigns unique labeled values to each binary object in the input image and stores the assigned labels to output. Pixels are considered to be connected and part of the same object if they are adjacent to a non-zero pixel in any of the four cardinal directions or the four ordinal directions. Three sub-operations are available:
Logical And
Performs pixel-wise logical “and” operation between the first and second input and returns
a binary output of non-zero pixels on both inputs. Two logical and sub-operations are
available:
Logical Not
Performs pixel-wise logical “not” operation on the input image and stores binary result to
output. This function will return a value of 255 for all zero pixels and 0 for all non-zero
pixels.
LogicalNot (image input, image output)
Logical Or
Performs pixel-wise logical “or” operation between the first and second input and returns a
binary output of non-zero pixels of either inputs. Two logical or sub-operations are
available:
Overview
Morphological operations process images based on shapes. They apply a structuring element to the input image to modify the neighborhood surrounding each pixel of interest. By choosing the size and shape of the structuring element, one can construct a morphological operation that is sensitive to specific shapes in the input image.
The basic morphological operations are dilation and erosion. Binary erosion removes pixels
on object boundaries while binary dilation adds pixels to the boundaries of object masks. The
number of pixels added or removed from the object masks depends on the size and shape of
the structuring element.
- Binary Erosion
Let E be a Euclidean space or an integer grid, and B a binary image in E. The erosion of B
by the structuring element S is defined by:
$$B \ominus S = \{\, z \in E \mid S_z \subseteq B \,\} = \bigcap_{s \in S} B_{-s}$$
- Binary Dilation
The dilation of B by the structuring element S can be similarly defined by:
$$B \oplus S = \bigcup_{s \in S} B_{s}$$
Figure 1(a) shows a rectangular shape. The erosion of the rectangular shape by a disk structuring element (illustrated in apricot) results in the smaller, medium purple rectangular region in (b). The dilation of the rectangular shape in (a) by the same disk results in the addition of the cyan region with rounded corners in (c).
- Binary Opening
The opening of B by S is obtained by the erosion of B by S, followed by dilation of the
resulting image by S:
$$B \circ S = (B \ominus S) \oplus S$$
The opening is also given by
$$B \circ S = \bigcup_{S_i \subseteq B} S_i$$
which means that it is the locus of translations of the structuring element S inside the image
B. In the case of the square of side 20, and a disc of radius 4 as the structuring element,
the opening is a square of side 20 with rounded corners, where the corner radius is 4.
- Binary Closing
The closing of B by S is obtained by the dilation of B by S, followed by erosion of the
resulting image by S:
B ● S = (B ⊕ S) Θ S
The closing can also be obtained by
B ● S = (B^c ⊕ S^s)^c
The above means that the closing is the complement of the locus of translations of the
symmetric of the structuring element outside the image B.
Opening and closing are the basic workhorses of morphological noise removal. Opening
removes small objects, while closing removes small holes.
Figure 2(a) shows the opening of the rectangular shape in Figure 1(a) by a disk structuring
element (shown in apricot), which results in the yellow rectangular region with rounded
corners. Figure 2(b) shows a region formed by two overlapping circles. The closing of shape
(b) by the same disk results in the addition of the pink regions that fill in the corners, as
shown in (c).
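The noise-removal behavior can be illustrated with the following minimal sketch, using an arbitrary 3x3 structuring element and synthetic noise (not the CL-Quant operations):

# Illustration only: opening drops isolated specks, closing fills small holes.
import numpy as np
from scipy import ndimage

rng = np.random.default_rng(0)
mask = np.zeros((100, 100), dtype=bool)
mask[30:70, 30:70] = True                          # one large object
mask |= rng.random(mask.shape) > 0.995             # salt noise: isolated pixels
mask[45:47, 45:47] = False                         # a small hole inside the object

S = np.ones((3, 3), dtype=bool)
opened = ndimage.binary_opening(mask, structure=S)     # removes the small specks
cleaned = ndimage.binary_closing(opened, structure=S)  # fills the small hole
print(mask.sum(), opened.sum(), cleaned.sum())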
Denoting a one-dimensional image by I(x) and the structuring function by S(x), the
grayscale dilation of I by S is given by
(I ⊕ S)(x) = max_{z ∈ S} { I(x - z) + S(z) }
and the grayscale erosion of I by S is given by
(I Θ S)(x) = min_{z ∈ S} { I(x + z) - S(z) }
The operations extend directly to 2-D images. Figure 4(a) shows a neuron growth cone
image with bright microtubule plus-end tips. Dilation of the image by a rod structuring
element of radius 7 results in the image in Figure 4(b), where the bright spots are expanded.
Erosion of the image by the same structuring element results in the image in Figure 4(c),
where the bright spots are much reduced. Opening of the image by the same structuring
element results in the image in Figure 4(d), where the bright spots are dimmed. Closing of
the image by the same structuring element results in the image in Figure 4(e), where the
cellular background is brighter because darker textures are filled in.
Figure 4: (a) input growth cone image; (b) dilation; (c) erosion; (d) opening; (e) closing (all by the rod structuring element of radius 7).
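A minimal sketch of these grayscale operations on a 2-D array follows; the synthetic image and the disc footprint standing in for the rod element are illustrative assumptions only:

# Illustration only: grayscale dilation/erosion/opening/closing of a noisy image.
import numpy as np
from scipy import ndimage

def disc(radius):
    y, x = np.ogrid[-radius:radius + 1, -radius:radius + 1]
    return x * x + y * y <= radius * radius

rng = np.random.default_rng(1)
image = rng.poisson(10, size=(128, 128)).astype(float)    # noisy background
image[40:43, 40:43] += 100                                # one bright spot

fp = disc(7)
dilated = ndimage.grey_dilation(image, footprint=fp)   # bright spot expands
eroded = ndimage.grey_erosion(image, footprint=fp)     # bright spot shrinks
opened = ndimage.grey_opening(image, footprint=fp)     # bright spot is dimmed
closed = ndimage.grey_closing(image, footprint=fp)     # dark texture is filled in
print(dilated.max(), eroded.max(), opened.max(), closed.min())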
Usage
The list of morphological operations appears below. The variations are mostly related to
different configurations of structuring elements.
Descriptions of operations and their sub-operations are given below, along with their input
parameters.
Angular
Performs specified operation on the input image with structuring kernel specified by the
angle in radians and stores result to output image. Four functions are available:
Close
Performs a dilation operation followed by an erosion operation using the same structuring
element. Two sub-operations are available:
Close Disc
Performs a closing operation using a disc-shaped structuring element instead of a square
kernel.
CloseDisc (image input, int32 kernelSize, image output)
Dilate
Takes the maximum value of all pixels in the kernel neighborhood of a pixel in the input
image and propagates the pixel maximum value to all the pixels in the kernel
neighborhood. Two sub-operations are available:
Dilate 3x3
Performs a dilation operation with a 3x3 structuring element of specific arrangement and
stores result to output image. Three dilate 3x3 operations are available:
Dilate Diagonal
Performs dilation operation with an n x n diagonal structuring element and stores result to
output image. Two dilate diagonal operations are available:
(See Figure 8 and Figure 9.)
Dilate Disc
Performs a dilation operation with a disc-shaped structuring element and stores result to
output image.
DilateDisc (image input, int32 kernelSize, image output)
Erode
Takes the minimum value of all pixels in the kernel neighborhood of a pixel in the input
image and propagates the pixel minimum value to all the pixels in the kernel
neighborhood. Two sub-operations are available:
Erode 3x3
Performs erosion operation with a 3x3 structuring element of specific arrangement and
stores result to output image. Three erode 3x3 operations are available:
Erode Diagonal
Performs erosion operation with an n x n diagonal structuring element and stores result to
output image. Two erode diagonal operations are available:
Erode Disc
Performs an erosion operation with a disc-shaped structuring element and stores result to
output image.
ErodeDisc (image input, int32 kernelSize, image output)
Open
Performs an erosion operation followed by a dilation operation using the same structuring
element. Two sub-operations are available:
Open Disc
Performs an opening operation using a disc-shaped structuring element instead of a
square kernel.
OpenDisc (image input, int32 kernelSize, image output)
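The disc-shaped variants can be sketched as follows; treating kernelSize as the disc radius is an assumption, and this is an illustration rather than the CL-Quant implementation:

# Illustration only: opening/closing with a disc footprint instead of a square kernel.
import numpy as np
from scipy import ndimage

def disc(radius):
    y, x = np.ogrid[-radius:radius + 1, -radius:radius + 1]
    return x * x + y * y <= radius * radius

def open_disc(image, kernel_size):
    return ndimage.grey_opening(image, footprint=disc(kernel_size))

def close_disc(image, kernel_size):
    return ndimage.grey_closing(image, footprint=disc(kernel_size))

img = np.random.default_rng(2).integers(0, 255, size=(64, 64)).astype(float)
print(open_disc(img, 3).mean(), close_disc(img, 3).mean())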
Application
Morphological operations are the basis for our structure-guided image processing approach.
In the following subsections, we will describe structure-guided image feature enhancement
and structure-guided image feature extraction.
For example, feature enhancement by opening and closing with structuring element A, followed by opening and closing with structuring element B, can be written as
(((I ∘ A) ● A) ∘ B) ● B
Figure 10 (c) illustrates the effect of opening and closing by A. Note that only partial
resulting profile is shown in the illustration. Figure 10 (d) illustrates the effect of further
opening and closing by B. The resulting feature enhanced image is shown in Figure 10 (e).
The feature enhancement process removes noise and preserves the structure of the
features of interest. There is no blur, ringing, overshoot or pre-shoot normally caused by
phase distortion of linear filtering.
The structure-guided feature enhancement process can start with grayscale opening followed
by grayscale closing, or with grayscale closing followed by grayscale opening. Opening first
enhances dark features, and closing first enhances bright features. Each opening and closing
iteration can use the same structuring element size for detailed feature refinement, or an
increased structuring element size for more aggressive feature refinement. Elongated
structuring elements in orthogonal directions can be applied alternately or sequentially in the
enhancement processing sequence, as illustrated by the sketch below.
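A minimal sketch of such an alternating opening/closing sequence is given below; the square structuring elements and the size schedule are arbitrary assumptions, not CL-Quant's settings:

# Illustration only: alternate grayscale opening and closing with growing elements.
import numpy as np
from scipy import ndimage

def enhance(image, sizes=(3, 5, 7), open_first=True):
    out = image.astype(float)
    for s in sizes:
        if open_first:                                     # enhances dark features
            out = ndimage.grey_opening(out, size=(s, s))
            out = ndimage.grey_closing(out, size=(s, s))
        else:                                              # enhances bright features
            out = ndimage.grey_closing(out, size=(s, s))
            out = ndimage.grey_opening(out, size=(s, s))
    return out

noisy = np.random.default_rng(3).normal(100.0, 10.0, size=(128, 128))
print(enhance(noisy).std() < noisy.std())                  # noise is suppressed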
Figure 10: (a) input: noisy edge profile; (b) structuring elements; (c) opening/closing by A.
The structure-guided image feature extraction approach efficiently extracts image features
of interest and removes noisy and irrelevant information. This is accomplished by a
sequence of grayscale morphological operations that encode structure information in
directional elongated structuring elements, which can be implemented efficiently.
I - I Θ A
I ⊕ A - I
Figure 12 illustrates the grayscale dilation residue operation applied to the one
dimensional ramp edge I shown in Figure 11(a). Figure 12(a) shows the dilated profile of
I by A. The dilation residue result is shown in Figure 12(b). As shown in Figure 12,
grayscale morphological dark edge detection does not introduce undesired phase shift
or blurry effect.
I ⊕ A - I Θ A
Figure 13 illustrates the difference of grayscale dilation and erosion operation applied to
the one dimensional ramp edge I shown in Figure 11(a).
Figure 13 (a) shows the dilated and eroded profile of I by A. The difference of grayscale
dilation and erosion result is shown in Figure 13(b). As shown in Figure 13, grayscale
morphological edge detection does not introduce undesired phase shift or blurry effect.
I - I ∘ A
Figure 14
I ● A - I
Figure 14 illustrates grayscale closing residue applied to the one dimensional image
profile shown in Figure 14 (a). Figure 14 (b) shows the closing result of I. The closing
residue result is shown in Figure 14(c). As can be seen in Figure 14(c), grayscale
morphological line/region detection does not introduce undesired phase shift or blurry
effect.
I ● A - I ∘ A
Figure 14 illustrates the difference of grayscale closing and opening applied to the one
dimensional image profile shown in Figure 14(a). Figure 14 (b) shows the closing and
opening results of I. The difference of grayscale closing and opening is shown in Figure
14(e). As can be seen in Figure 14(e), the morphological region contrast extraction does
not introduce any undesired phase shift or blurry effect.
I ∘ A - I Θ A
The difference between grayscale morphological opening and erosion, shown above, extracts
the bright region boundary. Similarly, the dark region boundary can be defined as the
difference between grayscale morphological dilation and closing:
I ⊕ A - I ● A
A general region boundary can be defined as the difference between the sum of grayscale
morphological opening and dilation and the sum of grayscale morphological erosion and
closing:
(I ∘ A + I ⊕ A) - (I Θ A + I ● A)
(a) Input step edge image; (b) horizontal structuring element H; (c) I ⊕ H - I Θ H.
Figure 16: (a) input frame image; (b) horizontal structuring element H; (c) closing residue I ● H - I.
The extraction of features from any direction can be accomplished with the
structure-guided feature extraction approach, and the features extracted from multiple
directions can be combined by a union (maximum) of the directional features, as sketched
below. Furthermore, two-dimensional structuring elements of different sizes and shapes can
be used to extract desired regions.
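A minimal sketch of combining directional boundary features by a pixel-wise maximum follows; the use of the dilation-minus-erosion gradient and the line-element length are illustrative assumptions:

# Illustration only: morphological gradient with horizontal and vertical line
# elements, combined by a pixel-wise maximum (union of directional features).
import numpy as np
from scipy import ndimage

def line_element(length, horizontal=True):
    return np.ones((1, length), bool) if horizontal else np.ones((length, 1), bool)

def directional_boundary(image, length=9):
    feats = []
    for horizontal in (True, False):
        fp = line_element(length, horizontal)
        grad = ndimage.grey_dilation(image, footprint=fp) - \
               ndimage.grey_erosion(image, footprint=fp)
        feats.append(grad)
    return np.maximum.reduce(feats)

img = np.zeros((64, 64))
img[:, 32:] = 100.0                         # a vertical step edge
print(directional_boundary(img).max())      # boundary response along the edge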
Histogram Equalize
Redistributes input image intensities to utilize the full dynamic range available. Gain for
each gray level is separately determined by its frequency of occurrence in the input. Result
is stored to the output image.
HistogramEqualize (image input, image output)
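A minimal sketch of histogram equalization for an 8-bit image, using the cumulative histogram as the per-gray-level mapping, is shown below (illustration only, not the CL-Quant implementation):

# Illustration only: 8-bit histogram equalization via the cumulative histogram.
import numpy as np

def histogram_equalize(image):
    hist = np.bincount(image.ravel(), minlength=256)
    cdf = np.cumsum(hist).astype(float)
    cdf = (cdf - cdf.min()) / (cdf.max() - cdf.min())     # normalize to [0, 1]
    lut = np.round(cdf * 255).astype(np.uint8)            # per-gray-level mapping
    return lut[image]

rng = np.random.default_rng(4)
img = np.clip(rng.normal(60, 10, (64, 64)), 0, 255).astype(np.uint8)
eq = histogram_equalize(img)
print(img.min(), img.max(), eq.min(), eq.max())           # output spans the full range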
Fill Holes
Fills in small, enclosed gaps in the input mask and stores result to output.
FillHoles (image input, int32 sizeValue, image output)
Remove Objects
Removes input mask objects that fall below a specified cutoff size.
RemoveObjects (image input, int32 sizeValue, image output)
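The two mask clean-up steps can be sketched as follows; note that unlike the FillHoles operation, this sketch fills enclosed holes of any size, and interpreting sizeValue as an area threshold in pixels is an assumption:

# Illustration only: fill enclosed holes, then drop objects below a size cutoff.
import numpy as np
from scipy import ndimage

def remove_objects(mask, size_value):
    labels, _ = ndimage.label(mask)
    areas = np.bincount(labels.ravel())
    keep = areas >= size_value
    keep[0] = False                          # never keep the background label
    return keep[labels]

mask = np.zeros((50, 50), dtype=bool)
mask[10:30, 10:30] = True                    # a large object
mask[15:17, 15:17] = False                   # with a small hole
mask[40, 40] = True                          # and a tiny isolated object
cleaned = remove_objects(ndimage.binary_fill_holes(mask), size_value=10)
print(mask.sum(), cleaned.sum())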
Figure 1(a) shows an image consisting of four circular component regions. In Figure 1(b), the
foreground pixels in the component regions are labeled with their component labels. The
background (the non-component regions) can be partitioned into zones of influence (ZOI),
as shown in Figure 1(c). In the ZOI image, each pixel is associated with a component label:
background pixels are labeled with the label of the component closest to them. Figure 1(d)
shows the ZOI image overlaid with the component regions.
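A minimal sketch of a Euclidean ZOI map computed from a distance transform is shown below (illustration only):

# Illustration only: every background pixel takes the label of the nearest component.
import numpy as np
from scipy import ndimage

mask = np.zeros((60, 60), dtype=bool)
mask[10:18, 10:18] = True                    # component 1
mask[40:48, 35:43] = True                    # component 2
labels, n = ndimage.label(mask)

# For each background pixel, find the indices of the nearest foreground pixel;
# indexing the label image with these indices yields the ZOI image.
_, (iy, ix) = ndimage.distance_transform_edt(labels == 0, return_indices=True)
zoi = labels[iy, ix]
print(n, np.unique(zoi))                     # every pixel now carries a component label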
Usage
In CL-Quant, there are three ComputeZOI operations - 4-connected, 8-connected, and
Euclidean - that define how the ZOI distances are measured. There are two partition
operations - Guided Partition and Multi-Guided Partition. The operations selection menu is
shown below.
Compute ZOI
Takes a binary input image and generates a grayscale Zones of Influence (ZOI) map,
where each zone is determined by the distance between neighboring objects.
Three sub-operations are available:
Guided Partition
Segments a contiguous binary input mask into discrete objects. Partition follows an input
reference seed mask. Partitioned mask is stored to output.
GuidedPartition (image input, image seed, bool fourConnected, image output)
Multi-Guided Partition
The multi-guided partition function takes three reference seed masks as inputs, together with
a size range consisting of a lower bound and an upper bound, and stores the result to output.
The operation runs three iterations (a sketch of this iteration logic appears after the signature
below):
(1) 1st iteration: performs guided partition, using the largeSeed, on regions of the input
image that are larger than the size lower bound, and retains the resulting regions that
fall within the size range in the output image. The remaining regions larger than the
upper bound continue to be partitioned in the 2nd iteration.
(2) 2nd iteration: performs guided partition on the remaining large regions using the
smallSeed and retains the resulting regions that fall within the size range in the output
image. The remaining regions larger than the upper bound continue to be partitioned
in the 3rd iteration.
(3) 3rd iteration: performs guided partition on the remaining large regions using the
adaptiveSeed and retains all resulting regions in the output image.
Multi-GuidedPartition (image input, image largeSeed, image smallSeed, image
adaptiveSeed, sizeRange, image output)
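The iteration logic can be sketched as follows; guided_partition below is a hypothetical stand-in for the GuidedPartition operation (it ignores the seed), and measuring region size as area in pixels is an assumption:

# Illustration only: three-pass partition with large, small, and adaptive seeds.
import numpy as np
from scipy import ndimage

def guided_partition(mask, seed):
    # Hypothetical stand-in: a real implementation would split mask using seed.
    labels, _ = ndimage.label(mask)
    return labels

def multi_guided_partition(mask, large_seed, small_seed, adaptive_seed, size_range):
    lower, upper = size_range
    output = np.zeros(mask.shape, dtype=bool)
    remaining = mask.copy()
    for i, seed in enumerate((large_seed, small_seed, adaptive_seed)):
        labels = guided_partition(remaining, seed)
        areas = np.bincount(labels.ravel())
        carry = np.zeros_like(remaining)
        for lab in range(1, labels.max() + 1):
            region = labels == lab
            if i == 2 or lower <= areas[lab] <= upper:
                output |= region             # keep regions within the size range
            elif areas[lab] > upper:
                carry |= region              # partition again in the next pass
        remaining = carry
    return output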
Chapter 3 Teaching CL-Quant
3.3 Additional Procedures >> 3.3.1 Tracking Procedure Wizard
Other types of key image recognition procedures include motility tracking and decision
procedures. These are described in the sections below.
NOTE:
Tracking procedure creation requires the “Live Tracking” license.
Track
A track is the path traveled by a tracked object. The track links individual objects at each time
frame. Tracking measurements characterize the tracks in several different ways:
- Trajectory measurements:
Characterize the track in its entirety; there is one value per track over its whole lifetime. For
example, straight line length is the distance the track object travels "as the crow flies".
- Motility measurements:
Characterize the track at each time point. For example, velocity angle measurement 4 in
the figure below is the angle between the direction of the track object at frame 4 and frame 3.
- Polar measurements:
With polar measurements, the object mask is transformed from the image (x, y) coordinates
to polar coordinates (r, θ), where r is the distance from the center of the object and θ is the
angle from the image x axis (θ = 0 is parallel to the x axis). Measurements are then made on
the mask in the polar domain.
Tracking Parameters
Sample window size Defines the width (number of frames) of a moving window in
which measurements, such as velocity, are calculated.
Min object size Objects in the segmentation mask below this size (pixels) are
not tracked.
Max search range Specifies the search radius (in number of pixels) for
connecting track points from previous track point position.
Split Threshold Threshold for split detection. A lower threshold value will bias
toward more splits and vice versa.
Merge threshold Threshold for merge detection. A lower threshold value will
bias toward more merges and vice versa.
Min frames for overlapping Minimum threshold number of image frames for which two
touching objects are considered in an overlapped state.
Min trajectory length Minimum threshold number of frames that a tracked object
must appear in to be considered a trajectory.
Tracking Options
Object Split Given one object, object A, in the previous frame, object A is split
if two objects, object B1 and object B2, match object A in the
current frame.
- Allow split
Two new tracks will be created in the current frame for a
trajectory that satisfies the condition above. Select this option
for cells undergoing division.
- Ignore split
A new track will be created for one of the new objects in the
current frame for a trajectory that satisfies the condition
above; the other new object will inherit the previous trajectory.
Object Merge Given two objects, object A1 and object A2 in the previous
frame, the trajectories of objects A1 and A2 are merged if one
object, object B, matches both A1 and A2 in the current frame.
- Ignore merge
Object masks that satisfy the conditions above will be merged
to a single mask and inherit the trajectory of one of the
merged components in the current frame.
Enable lineage interface When this option is checked, a track object will be ended
when a mask component separates (i.e. cell division), and two
new tracks (i.e. daughter cells) will be created. This option
allows for tracking lineage (previous generation) of splitting
cells.
When this option is enabled, lineage measures (parent / child
track number, generation, descendants) are automatically
created. These measures are defined in the next section.
Enable robust measurement When this option is checked, track measurements of large cell
clusters will be separated from track measurements of
isolated cells.
Object-to-object overlap Given an object A in the previous frame and object B in the
current frame, one of two methods is used to determine
whether A “overlaps” B, and therefore, is a possible match for
B:
1) The pixels of A intersect the pixels of B, or
2) The zone of influence (ZOI) of A intersects the pixels of
B or the ZOI of B intersects the pixels of A.
This option specifies which method to use (see the sketch after
this table). If this option is TRUE, method 1) is used; otherwise,
the less restrictive method 2) is used. The default value is TRUE.
In situations where the objects move slowly, i.e., there’s pixel
overlap from frame to frame, then method 1) may be better
since it is more restrictive about possible matches. In
situations where objects move quickly and the distance
moved between frames is larger than the size of the object
(no pixel overlaps), then method 2) must be used.
Remove short track when merging When two objects are considered to be in a merge state,
checking this option will terminate the shorter track at the
merging time point.
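The two overlap tests described for the Object-to-object overlap option can be sketched as follows; this is an illustration of the idea only, not the tracking engine's implementation:

# Illustration only: method 1 tests pixel intersection, method 2 tests
# zone-of-influence (ZOI) intersection and is therefore less restrictive.
import numpy as np
from scipy import ndimage

def zoi_of(label_image, target_label):
    # Pixels whose nearest object in label_image is target_label.
    _, (iy, ix) = ndimage.distance_transform_edt(label_image == 0, return_indices=True)
    return label_image[iy, ix] == target_label

def overlaps(prev_labels, a, curr_labels, b, pixel_overlap=True):
    mask_a = prev_labels == a
    mask_b = curr_labels == b
    if pixel_overlap:                                    # method 1
        return bool(np.any(mask_a & mask_b))
    return bool(np.any(zoi_of(prev_labels, a) & mask_b) or      # method 2
                np.any(zoi_of(curr_labels, b) & mask_a))

prev = np.zeros((40, 40), dtype=int)
prev[5:10, 5:10] = 1
curr = np.zeros((40, 40), dtype=int)
curr[20:25, 20:25] = 1
print(overlaps(prev, 1, curr, 1), overlaps(prev, 1, curr, 1, pixel_overlap=False))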
When you are done with parameter configurations, click “Next” to define track measurements.
- Motility Measurements:
Motility measurements are state metrics for each track point at time t.
- Trajectory Measurements:
Trajectory measurements are metrics for the entire track up to t.
- Running Window Stats:
Running window stats are track measurements within the sample window size.
- Field Measurements
Field measurements provide metrics for counting object overlaps for all tracks combined.
Predecessor For a track at time t, the Object ID of the object on the track at time
t-1.
Successor For a track at time t, the Object ID of the object on the track at time
t+1.
Average intensity Mean of the pixel values measured in the track object mask at time
t.
Median intensity The median of the pixel values measured in the track object mask
at time t. The median is the number separating the higher half of
the pixel values from the lower half.
Top 5% intensity The top (or high) 5% intensity value measured in the track object
mask at time t.
Top 10% intensity The top (or high) 10% intensity value measured in the track object
mask at time t.
Top 15% intensity The top (or high) 15% intensity value measured in the track object
mask at time t.
Top 20% intensity The top (or high) 20% intensity value measured in the track object
mask at time t.
Velocity magnitude The displacement of the track object from t-1 to t divided by the
time increment.
Velocity angle The angle between the track object from t-1 to t.
Match score This is the matching confidence over time. 1 is perfect confidence.
As a cell touches other cells or enters into a cluster of cells its
match score will fall.
Compactness The square of the perimeter of the track object mask divided by
the object size.
Major to minor ratio The ratio of the major axis of the track object mask to the minor
axis of the track object mask.
Ellipse fitting angle Angle of fitting ellipse (of the object mask) to x axis of FOV.
Max polar radius Maximum radius of the points along the mask from the origin
(object center) when transformed in polar coordinates.
Min polar radius Minimum radius of the points along the mask from the origin
(object center) when transformed in polar coordinates.
Norm mean radius Mean polar radius divided by maximum polar radius.
Process count Number of peaks in the track object mask in polar coordinates.
Norm process mean radius Mean of all process max polar radii, divided by the track object’s
max polar radius.
Relative process angle Mean of the relative process angle divided by the track object
mean ellipse fitting angle.
Relative process angle std The standard deviation of the relative process angles.
Polar local maximum radii Local maximum radii of points along the mask centered at the
origin when transformed into polar coordinates. A maximum of ten
(10) local maxima may be displayed.
Polar local minimum radii Local minimum radii of points along the mask centered at the
origin when transformed into polar coordinates. A maximum of ten
(10) local minima may be displayed.
Total Time Total time span of a track (from starting to end point)
Straight Line Length Length of the straight line connecting track starting and end points
Curvature Rate Sum of the angle changes along the length of the track; the sign
reflects the direction of change.
Bounding Box Area The area of a bounding box generated from the minimum and
maximum values of the x- and y-coordinates over the track lifetime.
Overlapped Frame Count Total number of frames in which clustering of two masks is
observed.
Mean Overlapped Velocity Mean velocity of clustered mask objects in the overlapped state.
StdDev of Overlapped Velocity Standard deviation of the velocity of clustered mask objects in the
overlapped state.
Mean Overlapped Velocity Mean velocity of all tracked objects in overlapped state.
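A minimal sketch of a few of the measurements defined above, computed from a track's (x, y) positions over time, is shown below; the exact CL-Quant formulas are not reproduced here:

# Illustration only: velocity magnitude/angle, straight line length, total time.
import numpy as np

def track_measurements(points, dt=1.0):
    pts = np.asarray(points, dtype=float)             # shape (n_frames, 2) as (x, y)
    disp = np.diff(pts, axis=0)                       # per-frame displacement
    return {
        "total time": (len(pts) - 1) * dt,
        "straight line length": float(np.linalg.norm(pts[-1] - pts[0])),
        "velocity magnitude": np.linalg.norm(disp, axis=1) / dt,
        "velocity angle": np.degrees(np.arctan2(disp[:, 1], disp[:, 0])),
    }

print(track_measurements([(0, 0), (3, 4), (6, 8)]))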
Chapter 3 Teaching CL-Quant
3.3 Additional Procedures >> 3.3.2 Decision Procedures
NOTE:
Decision procedure creation requires the Decision module license configuration. Decision
procedures cannot be applied to classify trajectories.
Click the "Add Label" icon to create a category. The category name can be modified, and the
category can be removed (by clicking the red X icon under the "Add Label" icon). At least two
categories are required for creating a decision procedure. When you are finished creating
your classification categories, click "Next."
Edit Labels
Create a teaching set by highlighting a class in the control window, and clicking on object ROIs
(this process is called labeling). When you click on a ROI, that object is assigned the
highlighted class. Click on the ROI a second time to un-assign the class, or you can assign the
“None” class. You can label objects in as many FOVs as you want.
In this example, we have labeled several representative object ROIs for each class (yellow is
“Responders” and green is “Non-responders”). These labeled objects comprise the teaching
set. The software will use this information to create decision rules.
If you are satisfied with your decision teaching, click “Next” to continue and to finish the
decision wizard.
In this example, the tree has three classes: "Responder", "Non Responder", and "Artifact".
The tree is able to classify the user-labeled teaching set with 1.64% error. Each node in the
tree is a rule, which is applied to the incoming objects. If the answer to the rule is yes, the
object moves down to the next node below; if the answer is no, the object moves up to the
next node above. The object moves through the tree in this fashion until it reaches a terminal
node, where its classification class (as opposed to its taught or labeled class) is assigned.
Note that each class can have more than one terminal node.
The figure above shows the first rule in the tree, Rule 78. If an object's Cytoplasm - Nucleus
Area difference is greater than -14.0, it will move down to the next rule, Rule 23. Otherwise it
will move up to the next rule, Rule 33. The node statistics tell you the total number of samples
in the node and the number belonging to each of the three classes. The order corresponds
to the class listing at the top of the tree panel. Here, the order is the "Responder", "Non
Responder", then "Artifact" class. The additional 0 entries are for unused classes. Since this is
the top node, all the statistics for the teaching set are shown: 61 total objects were labeled, 29
Responder class, 29 Non Responder class, and 3 Artifact class.
Clicking on any node in the tree will highlight the object overlays on the images.
NOTE:
The tree only shows the classification performance on the teaching set objects where
the truth is known. The error rate tells you how well the tree performs given that you
know “the truth”. To ascertain the tree performance on all the FOV objects you must
apply the procedure to the training FOVs to classify all the objects (shown below).
The figure below shows one of the tree’s terminal nodes. Objects that arrive in this terminal
node are classified as “Non Responder”. In this case, there are 23 teaching set objects in the
node. Of these 0 were taught as “Responder”, 22 were taught as “Non Responder” and 1 was
taught as an “Artifact” (this is the only error in the tree).
Contingency table
The contingency table presents the tree’s overall classification accuracy on the teaching set.
Read from left to right along the rows, it tells you the classification error for the labeled
teaching sets. Read from top to bottom down the columns, it tells you the classification error
for the classes. The tree only has one misclassified object, but its error can be thought of in
these two different ways. Reading along the row we see that of three labeled “Artifacts”, one
was mis-classified. Reading down the column we see that of 30 classified “Non Responders”,
one was actually an “Artifact”.
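A minimal sketch of how such a contingency table is assembled from labeled (taught) and classified objects is shown below; the class names match the example, but the counts are hypothetical:

# Illustration only: rows are taught classes, columns are classified classes.
import numpy as np

classes = ["Responder", "Non Responder", "Artifact"]
index = {c: i for i, c in enumerate(classes)}

def contingency(labeled, classified):
    table = np.zeros((len(classes), len(classes)), dtype=int)
    for truth, predicted in zip(labeled, classified):
        table[index[truth], index[predicted]] += 1
    return table

labeled = ["Artifact", "Artifact", "Responder", "Non Responder"]
classified = ["Non Responder", "Artifact", "Responder", "Non Responder"]
print(contingency(labeled, classified))               # off-diagonal entries are errors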
To understand the error case better, we can go to the parallel coordinates plot.
Parallel plot
You can populate the plot objects by clicking on a node in the tree. The example plot below
shows the objects in the terminal node shown above, which contains the tree’s only error. As
described above this node contains 23 objects classified as “Non Responders”, of these 1
object was labeled as “Artifact”.
Each vertical bar represents a measurement used at the decision nodes leading to the
terminal node. Since this node is in a shallow position, only two measurements are
used to reach it. The parallel coordinate plot enables you to see the distribution of the node
objects with respect to the measurements. Each object is represented by a line. The “Non
Responder” labeled objects are shown in green, and the “Artifact” object is shown in red. The
shading on the bars represents a histogram of the entire teaching set population distribution
for the measurement; darker shading indicates a higher frequency of occurrence.
Right clicking on the tree will bring up the context menu for manual tree modifications as
shown below. The first two entries will toggle the contingency table and parallel coordinate
plot. The remaining items are described below.
Auto Prune
This operation will automatically prune the tree.
Chapter 3 Teaching CL-Quant
3.4 Other Procedure >> 3.4.1 FOV Import Procedure
CL-Quant provides additional utility procedures such as basic enhancement procedures and
FOV import procedures.
To create the procedure, right click on the "FOV import procedures" folder in the data explorer
and select "New". This will create a new import procedure data object. Double click on the data
object to launch the procedure in a new window.
The FOV import procedure has the same organization as the FOV RFrame, except that the FOV
list panel is treated differently. In the normal FOV RFrame, the FOV list panel shows the FOVs in
an RFrame list; here, the panel shows the folders or images on disk that will be imported to
the channel or mask. To specify individual images or folders for importing to the FOV, simply drag
and drop them from Windows Explorer. Most supported image types can be used.
For example, say you want to create a three-channel movie, and on the file system you have the
individual files arranged in folders. To create an import procedure, simply drag and drop the
first folder from the file system into the FOV list panel. That folder will be associated with Channel 0,
and when you execute the procedure, images from that folder will be loaded into Channel 0. To
create the second channel, right click on the channel strip and select "Add Channel". Repeat the
same steps as for Channel 0 until all the channels are created and their import directories or
images are defined.
You can add masks to the channels in the same way, but first be sure to right click on the mask strip
and select "Add Mask". You'll notice when you click on the new mask tab that the image list
panel is now blank. After adding files from the file system, the image list panel will show the images
associated with Mask 0 (if you click on the “Channel 0” tab, it will show the images associated
with Channel 0). To create a second mask, just right click on the mask strip and select “Add
Mask”. Masks can be added, removed/deleted, and renamed. In addition, you may change the
color of the mask. When the procedure is executed the imported masks will be assigned the
selected color.
At any time, you may click on “Clear List” or “Cancel” to erase your channel or mask import
specification.
Relative path
You may also check the “Relative path” box. In this mode, the directory structure will be
interpreted relative to the location of the import procedure. This allows you to create movable
procedures which do not rely on the absolute file system path. If it is kept unchecked, the
procedure will always import files from the same absolute file path.
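A minimal sketch of the difference between the two modes follows; the file and procedure names are hypothetical:

# Illustration only: resolve an import entry against the procedure's folder
# ("Relative path" checked) or use it as given (unchecked).
from pathlib import Path

def resolve(entry, procedure_file, relative_path=True):
    entry = Path(entry)
    if relative_path and not entry.is_absolute():
        return Path(procedure_file).parent / entry
    return entry

print(resolve("channel0/frame_001.tif", "D:/assays/import_proc.dat"))
print(resolve("D:/raw/channel0/frame_001.tif", "D:/assays/import_proc.dat", relative_path=False))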
Load as FOVs
When this option is checked, the files will be loaded into FOVs or an FOV list. If unchecked,
they will be loaded into a single time-lapse FOV.
Load as T Movie
When this option is selected (red box), the files will be loaded into a single time-lapse FOV as a T
series. Note that the dimensions of the imported images must all be the same to use this option.
Load as Z Movie
When this option is selected (red box), the files will be loaded into a single FOV as a Z
series. Note that the dimensions of the imported images must all be the same to use this option.
Symbol for separate collection applicable in European countries
This symbol indicates that this product is to be collected separately.
The following apply only to users in European countries.
• This product is designated for separate collection at an appropriate collection point. Do not dispose of as household waste.
• For more information, contact the retailer or the local authorities in charge of waste management.
(Equivalent notices are provided in Norwegian, German, Swedish, French, Finnish, Spanish, Russian, Danish, Greek, Dutch, Polish, Portuguese, Hungarian, Italian, and Czech.)
This symbol is provided for use in the People's Republic of China, for environmental protection in the fields of electronic information products.
ADDRESS

Nikon Corporation (Industrial Instruments / Bioscience)
Shin-Yurakucho Bldg., 1-12-1 Yurakucho, Chiyoda-ku, Tokyo 100-8331, Japan
Instruments Company, Industrial Instruments Marketing Dept., Sales Section: tel. (03) 3216-2384
Instruments Company, Industrial Instruments Marketing Dept., Sales Promotion Section: tel. (03) 3216-2371
Instruments Company, Bioscience Marketing Dept., Sales Section: tel. (03) 3216-2375
Instruments Company, Bioscience Marketing Dept., Sales Promotion Section: tel. (03) 3216-2360

Nikon Instech Co., Ltd.
Head Office: Shin-Yurakucho Bldg. 4F, 1-12-1 Yurakucho, Chiyoda-ku, Tokyo 100-0006, tel. (03) 3216-9171 (Industrial Instruments) / (03) 3216-9163 (Bioscience)
Sapporo Office: SR Bldg. 8F, Minami 1-jo Higashi 2-8-2, Chuo-ku, Sapporo 060-0051, tel. (011) 281-2535 (Bioscience)
Sendai Office: Mitsui Seimei Sendai Honcho Bldg. 19F, 1-1-1 Honcho, Aoba-ku, Sendai 980-0014, tel. (022) 263-5855 (Bioscience)
Nagoya Office: Crest Bldg. 2F, 3-86 Issha, Meito-ku, Nagoya 465-0093, tel. (052) 709-6851 (Bioscience / Industrial Instruments)
Kansai Branch: Uemura Nissei Bldg. 16F, 3-3-31 Miyahara, Yodogawa-ku, Osaka 532-0003, tel. (06) 6394-8802 (Industrial Instruments) / (06) 6394-8801 (Bioscience)
Kyushu Branch: 1-4-1 Tanotsu, Higashi-ku, Fukuoka 913-0034, tel. (092) 611-1111 (Bioscience / Industrial Instruments)

NIKON INSTRUMENTS INC.
1300 Walt Whitman Road, Melville, N.Y. 11747-3064, U.S.A.
tel. +1-631-547-8500

NIKON INSTRUMENTS EUROPE B.V.
Tripolis 100, Burgerweeshuispad 101, 1076 ER Amsterdam, The Netherlands
tel. +31-20-7099-000

NIKON INSTRUMENTS (SHANGHAI) CO., LTD.  tel. +86-21-6841-2050
NIKON SINGAPORE PTE LTD  tel. +65-6559-3618
NIKON MALAYSIA SDN BHD  tel. +60-3-7809-3688
NIKON INSTRUMENTS KOREA CO., LTD.  tel. +82-2-2186-8410
NIKON INDIA PRIVATE LIMITED  tel. +91-124-4688500
NIKON CANADA INC.  tel. +1-905-602-9676
NIKON FRANCE S.A.S.  tel. +33-1-4516-45-16
NIKON GMBH  tel. +49-211-941-42-20
NIKON INSTRUMENTS S.p.A.  tel. +39-55-3009601
NIKON AG  tel. +41-43-277-28-67
NIKON UK LTD.  tel. +44-208-247-1717
NIKON GMBH AUSTRIA  tel. +43-1-972-6111-00
NIKON BELUX  tel. +32-2-705-56-65

NIKON METROLOGY, INC.
12701 Grand River Avenue, Brighton, MI 48116 U.S.A.
tel. +1-810-220-4360
sales_us@[Link]

NIKON METROLOGY EUROPE NV
Geldemaaksebaan 329, 3001 Leuven, Belgium
tel. +32-16-74-01-00
sales_europe@[Link]

NIKON METROLOGY GMBH
tel. +49-6023-91733-0
sales_germany@[Link]