ITK Pipeline
Medical image processing in ITK is organized as a series of filters that together form a processing pipeline.
Each filter performs a particular job and passes the processed data on to the next stage of the pipeline.
Each filter has inputs and outputs; the pipeline is configured by connecting the inputs of a filter to the outputs of its predecessors.
Unlike a simple linear pipeline that is fed with data at its head, ITK follows a different approach, the pull pipeline:
- The pipeline does not have to be linear: a filter can have multiple inputs, connected to different predecessors.
- Therefore it cannot be determined a priori which data needs to be fed into the heads of the graph.
- As a solution, the pipeline is pulled at its end. The pull request is propagated to all inputs of the last filter, which propagate it further upstream until a reader is reached. At that point it is clear which data chunk is needed by which processing job, so the data is read and sent to the first filtering stage, which consumes it and passes the processed data down the pipeline until it reaches the end, where the pull started (see the sketch after this list).
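A minimal sketch of both points, assuming two file readers feeding an itk::SubtractImageFilter; the 2D float image type and the file names are placeholders chosen for illustration:
#include "itkImage.h"
#include "itkImageFileReader.h"
#include "itkSubtractImageFilter.h"

int main()
{
  typedef itk::Image<float, 2>                                      ImageType;
  typedef itk::ImageFileReader<ImageType>                           ReaderType;
  typedef itk::SubtractImageFilter<ImageType, ImageType, ImageType> SubtractType;

  // Two graph heads (readers) feeding one filter: the pipeline is not linear.
  ReaderType::Pointer readerA = ReaderType::New();
  readerA->SetFileName("a.mhd");   // placeholder file name
  ReaderType::Pointer readerB = ReaderType::New();
  readerB->SetFileName("b.mhd");   // placeholder file name

  SubtractType::Pointer subtract = SubtractType::New();
  subtract->SetInput1(readerA->GetOutput());
  subtract->SetInput2(readerB->GetOutput());

  // Nothing has executed yet. Pulling at the tail propagates the request
  // to both readers, which load their files; then the subtraction runs.
  subtract->Update();

  return 0;
}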
Each filter is derived from the templated base class itk::ImageToImageFilter< InputImageType, OutputImageType >.
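The concrete types used in the example below could be defined as follows; this is only a sketch, assuming a 2D float image, and the median and Gaussian filters are arbitrary stand-ins for Filter1Type and Filter2Type:
#include "itkImage.h"
#include "itkImageFileReader.h"
#include "itkMedianImageFilter.h"
#include "itkDiscreteGaussianImageFilter.h"

typedef itk::Image<float, 2>                                   ImageType;
typedef itk::ImageFileReader<ImageType>                        ReaderType;
typedef itk::MedianImageFilter<ImageType, ImageType>           Filter1Type;
typedef itk::DiscreteGaussianImageFilter<ImageType, ImageType> Filter2Type;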
N linearly connected filters:
ReaderType::Pointer reader = ReaderType::New();
Filter1Type::Pointer filter1 = Filter1Type::New();
Filter2Type::Pointer filter2 = Filter2Type::New();
...
FilterNType::Pointer filterN = FilterNType::New();
// connect input of filters to output of predecessors
filter1->SetInput(reader->GetOutput());
filter2->SetInput(filter1->GetOutput());
...
filterN->SetInput(filterNminus1->GetOutput()); // filterNminus1 denotes the (N-1)-th filter
The pipeline is pulled by calling Update() on the last filter. The processed data is then available as an itk::Image from that filter's GetOutput() method.
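For the chain above this amounts to the following, where ImageType is assumed to be the output image type of filterN:
filterN->Update();                                      // triggers the pull through the whole chain
ImageType::Pointer outputImage = filterN->GetOutput();  // processed result as an itk::Image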
Usually, the output of the last filter is connected to the input of an image writer; pulling the writer then drives the whole pipeline.
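A minimal sketch of such a writer stage, continuing the snippet above; the output file name is a placeholder:
typedef itk::ImageFileWriter<ImageType> WriterType;  // from itkImageFileWriter.h

WriterType::Pointer writer = WriterType::New();
writer->SetFileName("output.mhd");        // placeholder output file name
writer->SetInput(filterN->GetOutput());   // connect the writer to the last filter
writer->Update();                         // pulling the writer executes the entire pipeline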
Note: The pipeline graph must not contain loops.
← ITK Concepts Used | ● | Thresholding with ITK →