Task Composition
The installation and provisioning process is driven by gathering a task list from the available scopes in preparation for task execution.
In preparation for gathering a task list, an ordered list of scopes is collected. This process of collecting scopes is repeated once for each stream specified in the project page or in the workspace.
For each list of scopes, an ordered list of tasks is collected.
Initially, for each scope in the list, three variables are induced, one each for the name, label, and description attributes of the scope, where the variable name is prefixed with the scope type as follows:

- scope.product.catalog
- scope.product
- scope.product.version
- scope.project.catalog
- scope.project
- scope.project.stream
- scope.installation
- scope.workspace
- scope.user

If the scope's label is null, the name is used as the label value, and if the scope's description is null, the scope's label is used as the description value. In addition to the name variable, for each product, product version, project, and project stream, an additional variable with the name suffix .qualified is induced, where the value is the qualified name of the scope. For example, the value of the scope.project.stream.name.qualified variable of the Oomph.setup project's master stream is org.eclipse.oomph.master. All these induced variables are added, in scope order, to the initial gathered list of tasks.
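As a sketch of the variable induction described above, the following Python snippet shows how the name, label, and description variables, with their null-value defaulting, and the .qualified variable could be derived for one scope. The function shape and the set of qualified scope types are illustrative assumptions, not Oomph's actual API:

```python
# Scope types that additionally get a ".name.qualified" variable
# (per the text: product, product version, project, project stream).
QUALIFIED_TYPES = {"product", "product.version", "project", "project.stream"}

def induce_scope_variables(scope_type, name, label, description, qualified_name):
    # null label defaults to the name; null description defaults to the label
    label = label if label is not None else name
    description = description if description is not None else label
    prefix = "scope." + scope_type
    induced = {
        prefix + ".name": name,
        prefix + ".label": label,
        prefix + ".description": description,
    }
    if scope_type in QUALIFIED_TYPES:
        induced[prefix + ".name.qualified"] = qualified_name
    return induced

# The master stream of the Oomph.setup project, as in the example above:
induced = induce_scope_variables(
    "project.stream", "master", None, None, "org.eclipse.oomph.master")
# induced["scope.project.stream.label"] == "master" (defaulted from the name)
# induced["scope.project.stream.name.qualified"] == "org.eclipse.oomph.master"
```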
Additional tasks are gathered into the task list from the ordered scopes by visiting each contained task of each scope.
The task list is processed to induce additional tasks, to override and merge tasks, to evaluate and expand variables, and to reorder tasks. The members of the task list that are variables induce an initial set of keys, i.e., the set of all variable names. Oomph tasks are modeled with EMF, so each task instance knows its corresponding EMF class. During the initial phase of processing, the list of tasks is analyzed to determine the set of EMF classes that are required to implement all the tasks in the list, and each EMF class is processed in turn.
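The key-set and class-set analysis can be sketched as follows, using plain Python classes as hypothetical stand-ins for Oomph's EMF-modeled task classes (the class names and attributes are assumptions for illustration):

```python
# Hypothetical stand-ins for EMF-modeled setup tasks (not Oomph's API).
class VariableTask:
    def __init__(self, name, value):
        self.name, self.value = name, value

class P2Task:
    def __init__(self, ius):
        self.ius = ius

def analyze(task_list):
    # the variables in the list induce the initial set of keys,
    # i.e., the set of all variable names
    keys = {t.name for t in task_list if isinstance(t, VariableTask)}
    # each task knows its class; collect the classes required to
    # implement all the tasks in the list
    classes = {type(t) for t in task_list}
    return keys, classes

tasks = [VariableTask("scope.project.name", "setup"),
         P2Task(["org.eclipse.oomph.setup.feature.group"])]
keys, classes = analyze(tasks)
# keys == {"scope.project.name"}; classes == {VariableTask, P2Task}
```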
Further processing proceeds as follows:
For the initial phase processing, all the tasks are efficiently copied, including their containing scopes. The copying process takes the task-to-task substitution map into account, i.e., each task is logically replaced in the copy by its merged override. As such, only the final merged override of each task remains in the resulting copy of the task list, and all references to the overridden and overriding tasks reference the final merged override instead. Further processing of the task list proceeds with this copied task list.
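The substitution-aware copy can be sketched like this. Oomph copies EMF objects; here plain strings stand in for tasks, and the chain-following and deduplication behavior is the point of the illustration:

```python
def copy_with_substitutions(tasks, substitutions):
    """Copy a task list, replacing each task by its final merged override."""
    def resolve(task):
        # follow override chains to the final merged task
        while task in substitutions:
            task = substitutions[task]
        return task
    copied, seen = [], set()
    for task in tasks:
        final = resolve(task)
        if final not in seen:  # each merged override survives only once
            seen.add(final)
            copied.append(final)
    return copied

# "eclipse.ini(a)" was overridden and merged into "eclipse.ini(merged)":
tasks = ["eclipse.ini(a)", "p2.director", "eclipse.ini(merged)"]
substitutions = {"eclipse.ini(a)": "eclipse.ini(merged)"}
copied = copy_with_substitutions(tasks, substitutions)
# copied == ["eclipse.ini(merged)", "p2.director"]
```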
An explicit key map, i.e., a map from variable name to variable, is computed by visiting each variable in the task list. Note that the preceding copying process will have eliminated duplicate variables. The initial phase processing then proceeds by visiting each task with a non-empty ID attribute as follows:
For a self-referencing variable induced from such a task's attribute, an additional variable with the name suffix .explicit is induced from the explicit annotations of the attribute, and the self-referencing variable's value is changed to refer to that explicit variable.
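A minimal sketch of the explicit key map and the self-reference redirection might look like the following. The dict-based variable shape, the example variable name, and the `${name}` reference syntax are simplifications of Oomph's setup-variable notation:

```python
def build_key_map(variables):
    # map from variable name to variable; the preceding copying step
    # has already eliminated duplicate variables
    return {v["name"]: v for v in variables}

def redirect_self_reference(variable, explicit_value):
    """If the variable references itself, induce a ".explicit" variable
    and point the self-referencing variable at it."""
    if variable["value"] == "${%s}" % variable["name"]:
        explicit = {"name": variable["name"] + ".explicit",
                    "value": explicit_value}
        variable["value"] = "${%s}" % explicit["name"]
        return explicit
    return None

var = {"name": "installation.location", "value": "${installation.location}"}
explicit = redirect_self_reference(var, "/opt/eclipse")
# var["value"] is now "${installation.location.explicit}"
```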
The final phase processes a task list that is either a concatenation of the task lists produced by the initial phase, or just the one task list already processed by the initial phase. As such, it works with task copies for which all variables have been expanded and eliminated. The processing for this phase augments the substitution map by analyzing the task list for structural duplicates. It then applies those substitutions, i.e., overriding and merging duplicate tasks, thereby reducing the task list before further processing.
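The duplicate reduction can be sketched as follows. The structural-key function is an assumption; Oomph compares the tasks' actual model structure to decide whether two tasks are duplicates:

```python
class Task:
    def __init__(self, kind, detail):
        self.kind, self.detail = kind, detail

def reduce_structural_duplicates(tasks, structural_key):
    """Merge structural duplicates into their first occurrence and
    extend the substitution map accordingly."""
    first_by_key, reduced, substitutions = {}, [], {}
    for task in tasks:
        key = structural_key(task)
        if key in first_by_key:
            # the duplicate is overridden and merged into the first task
            substitutions[task] = first_by_key[key]
        else:
            first_by_key[key] = task
            reduced.append(task)
    return reduced, substitutions

# two streams both contributed a p2 task installing the same feature:
tasks = [Task("p2", "featureA"), Task("git", "clone"), Task("p2", "featureA")]
reduced, subs = reduce_structural_duplicates(tasks, lambda t: (t.kind, t.detail))
# len(reduced) == 2; the second p2 task is substituted by the first
```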
The processing of the task list, particularly task overriding and merging, changes the overall order of the task list so that it differs from the original authored order gathered from the scopes. Moreover, when multiple streams are involved, the final phase processing deals with a concatenated list in which the tasks must be properly reordered. To support that, each task has an intrinsic priority, and the task list is primarily sorted according to that priority. Each task also specifies predecessors and successors, and the task list is secondarily sorted to respect the partial order they induce. After these two sorting steps, the tasks in the list are modified: both the predecessors and successors are cleared, and then the predecessors are set to form a chain that induces an overall order exactly matching the final order of the sorted task list; this chain excludes variables. This chain of dependencies ensures that the final phase processing, which deals with the concatenated task lists, properly interleaves the tasks (because of the priority sorting) while also respecting the per-stream order of the multiple streams.
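The reordering and chaining steps can be sketched as below. The secondary sort honoring authored predecessors and successors is omitted for brevity, and the numeric priorities and task shapes are assumptions for illustration:

```python
def order_and_chain(tasks):
    # primary: stable sort by intrinsic priority (lower runs earlier here)
    ordered = sorted(tasks, key=lambda t: t["priority"])
    # clear predecessors/successors, then rebuild predecessors as a chain
    # over the non-variable tasks so the final order is fully determined
    previous = None
    for task in ordered:
        task["predecessors"], task["successors"] = [], []
        if task["kind"] != "variable":  # the chain excludes variables
            if previous is not None:
                task["predecessors"] = [previous["name"]]
            previous = task
    return ordered

tasks = [
    {"name": "git.clone",   "kind": "task",     "priority": 200},
    {"name": "my.var",      "kind": "variable", "priority": 100},
    {"name": "p2.director", "kind": "task",     "priority": 150},
]
ordered = order_and_chain(tasks)
# order: my.var, p2.director, git.clone; git.clone's predecessor is
# p2.director, and the variable is left out of the chain
```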
Each task that excludes the current trigger is removed from the task list. Note that the gathering process gathers all tasks, because the task list is analyzed to determine which tasks need to be installed for all possible triggers. So for the bootstrap trigger, even the tasks that can't execute until they're running in an installed product are analyzed, to ensure that, once the product is installed, the tasks that need to perform in that installation, i.e., for the startup or manual trigger, are properly installed. Processing all tasks also implies that at bootstrap time, all variables that will be needed in the running installed product are prompted for early and hence will already be available in the running installed product.
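The trigger filtering itself is simple, as this sketch shows. The trigger names follow Oomph's bootstrap, startup, and manual triggers, while the task shapes and their valid-trigger sets are assumptions:

```python
BOOTSTRAP, STARTUP, MANUAL = "bootstrap", "startup", "manual"

def filter_for_trigger(tasks, trigger):
    # drop every task that excludes the current trigger
    return [t for t in tasks if trigger in t["valid_triggers"]]

tasks = [
    {"name": "p2.director", "valid_triggers": {BOOTSTRAP, STARTUP, MANUAL}},
    {"name": "preference",  "valid_triggers": {STARTUP, MANUAL}},
]
bootstrap_tasks = filter_for_trigger(tasks, BOOTSTRAP)
# only p2.director remains for the bootstrap trigger; the preference
# task waits for the installed product, but because the full list was
# gathered and analyzed, its variables were already prompted for
```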
The final task list processing step removes all variables from the task list and consolidates each remaining task. At this point, the tasks in the list are ready to be performed.