A business process starts with an event and moves toward a result. Viewflow separates workflow data from business data to keep your code organized.
Business data—your users, orders, products—is stable. You design it carefully and store it in normalized tables. Workflow data—current step, decisions, intermediate values—changes as the process runs.
The Process and Task models have two generic foreign key fields: seed and artifact. Think of a workflow as a journey: seed is the starting input, and artifact is the final output.
from django import forms

class SelectSeedForm(forms.ModelForm):
    seed = forms.ModelChoiceField(
        queryset=Source.objects.filter(...)
    )

    class Meta:
        model = Process
        fields = []

    def save(self, commit=True):
        # Assign the chosen object to the process's generic seed field
        self.instance.seed = self.cleaned_data["seed"]
        return super().save(commit=commit)
In your flow:
class SampleFlow(flow.Flow):
    start = flow.Start(
        CreateProcessView.as_view(
            form_class=SelectSeedForm,
        ),
    ).Next(...)
The Process and Task models also have a data JSONField for workflow-related values. This data only matters while the process runs: intermediate calculations, user choices, temporary flags.
Viewflow lets you treat JSONField contents like regular Django model fields. Define proxy models with virtual fields:
class ApprovalProcess(Process):
    # Stored as process.data['approved'], accessible as process.approved
    approved = jsonstore.BooleanField(default=False)

    class Meta:
        proxy = True


class ApprovalFlow(flow.Flow):
    ...
    approve = (
        flow.View(
            views.UpdateProcessView.as_view(fields=["approved"])
        )
        .Next(...)
    )
For more control, inherit from the Process or Task models directly; for full customization, inherit from AbstractProcess or AbstractTask. Note that direct inheritance adds a database join when loading the model:
class ShipmentProcess(Process):
    carrier = models.ForeignKey(Carrier, on_delete=models.CASCADE)
To keep tasks independent, initialize them with seed and data from previous tasks:
class MyFlow(flow.Flow):
    # Copy the start task's artifact to next_task.seed
    start = (
        flow.Start(SeedSelectionView.as_view())
        .Next(this.next_task, task_seed=lambda activation: activation.task.artifact)
    )

    # Create the task with preinitialized data
    next_task = (
        flow.If(cond=lambda activation: activation.task.seed.fresh)
        .Then(
            this.plant,
            task_data=lambda activation: {'seed': activation.task.seed},
        )
        .Else(
            this.eat,
            task_data=lambda activation: {'grain': activation.task.seed},
        )
    )
Each task gets what it needs without depending on the full process state.
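The branching above can be sketched in plain Python, with dicts standing in for tasks. The seed, fresh, plant, and eat names follow the example; the route function itself is hypothetical:

```python
def route(task):
    """Pick the next node and its initial data from the task's seed,
    mirroring the flow.If(...).Then(...).Else(...) node above."""
    seed = task['seed']
    if seed['fresh']:
        # fresh seed: plant it; the new task starts with just the seed
        return 'plant', {'seed': seed}
    # stale seed: eat it; the new task sees it as grain
    return 'eat', {'grain': seed}


node, data = route({'seed': {'fresh': True}})
assert node == 'plant' and data == {'seed': {'fresh': True}}
```

Each branch receives only the keys it needs, which keeps the downstream task logic free of process-wide state.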
When splitting work, initialize each branch with its own data:
split = (
    flow.Split()
    .Next(
        this.process_post,
        task_data_source=lambda activation: (
            {'post': post} for post in activation.process.data['posts']
        ),
    )
    .Next(this.join)
)
Each process_post task gets its own post to work on.
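The fan-out can be sketched in plain Python: one initial-data dict is produced per parallel task. The posts key follows the example; the fan_out function is hypothetical:

```python
def fan_out(process_data):
    """Build one initial-data dict per parallel task, one per post,
    mirroring the task_data_source callable above."""
    return [{'post': post} for post in process_data['posts']]


branches = fan_out({'posts': ['draft-1', 'draft-2']})
assert branches == [{'post': 'draft-1'}, {'post': 'draft-2'}]
```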
Subprocesses can also receive seed and data:
publish_post = flow.Subprocess(
    PublishFlow.as_subprocess,
    process_seed=lambda activation: activation.process.artifact,
).Next(this.end)