Sensors allow you to instigate runs based on any external state change.
| Name | Description |
|---|---|
| `@sensor` | The decorator used to define a sensor. The decorated function is called the `evaluation_fn`. The decorator returns a `SensorDefinition`. |
| `RunRequest` | The sensor evaluation function can yield one or more run requests. Each run request creates a pipeline run. |
| `SkipReason` | If a sensor evaluation doesn't yield any run requests, it can instead yield a skip reason to log why the evaluation was skipped or why there were no events to be processed. |
| `SensorDefinition` | Base class for sensors. You almost never want to initialize this class directly. Instead, use the `@sensor` decorator, which returns a `SensorDefinition`. |
Sensors are definitions in Dagster that allow you to instigate runs based on some external state change automatically. For example, you can launch a run whenever a file appears in a directory or an s3 bucket, whenever another pipeline materializes an asset, or whenever another pipeline run fails.

Sensors have several important properties:

- Each sensor targets a single pipeline, specified by `pipeline_name`.
- The sensor evaluation function can yield one or more `RunRequest` objects. Each run request launches a run.
- If no runs are requested, the evaluation function can instead yield a `SkipReason`, which specifies a message describing why no runs were requested.

The Dagster Daemon runs each sensor evaluation function on a tight loop. If you are using sensors, make sure to follow the instructions on the Dagster Daemon page to run your sensors.
To define a sensor, use the `@sensor` decorator. The decorated function is called the `evaluation_fn` and must take `context` as its first argument. The context is a `SensorExecutionContext`.
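As a minimal sketch (the pipeline name and the condition check here are hypothetical placeholders), a sensor definition looks like this:

```python
from dagster import RunRequest, SkipReason, sensor


def some_external_condition():
    # Placeholder for a real check against external state (files, an API, etc.).
    return True


@sensor(pipeline_name="my_pipeline")  # "my_pipeline" is a hypothetical target
def my_minimal_sensor(_context):
    if some_external_condition():
        yield RunRequest(run_key=None, run_config={})
    else:
        yield SkipReason("External condition not met.")
```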
Let's say you have a pipeline that logs a filename that is specified in the solid configuration of the `process_file` solid:
```python
from dagster import solid, pipeline


@solid(config_schema={"filename": str})
def process_file(context):
    filename = context.solid_config["filename"]
    context.log.info(filename)


@pipeline
def log_file_pipeline():
    process_file()
```
You can write a sensor that watches for new files in a specific directory and yields a `RunRequest` for each new file in the directory. By default, this sensor runs every 30 seconds.
```python
import os

from dagster import sensor, RunRequest

MY_DIRECTORY = "/path/to/my_directory"  # placeholder: the directory to watch


@sensor(pipeline_name="log_file_pipeline")
def my_directory_sensor(_context):
    for filename in os.listdir(MY_DIRECTORY):
        filepath = os.path.join(MY_DIRECTORY, filename)
        if os.path.isfile(filepath):
            yield RunRequest(
                run_key=filename,
                run_config={"solids": {"process_file": {"config": {"filename": filename}}}},
            )
```
This sensor iterates through all the files in `MY_DIRECTORY` and yields a `RunRequest` for each file. Once `my_directory_sensor` is added to a repository with `log_file_pipeline`, it can be enabled and used.
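For reference, here is a minimal repository sketch, assuming the pipeline and sensor above are defined in the same module:

```python
from dagster import repository


@repository
def my_repository():
    # Registering the sensor alongside its target pipeline makes it
    # available to turn on in Dagit or via the CLI.
    return [log_file_pipeline, my_directory_sensor]
```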
When instigating runs based on external events, you usually want to run exactly one pipeline run for each event. There are two ways to define your sensors to avoid creating duplicate runs for your events: using `run_key` and using a cursor.

In the example sensor above, the `RunRequest` is constructed with a `run_key`:
```python
yield RunRequest(
    run_key=filename,
    run_config={"solids": {"process_file": {"config": {"filename": filename}}}},
)
```
Dagster guarantees that for a given sensor, at most one run is created for each `RunRequest` with a unique `run_key`. If a sensor yields a new run request with a previously used `run_key`, Dagster skips processing the new run request.
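To make the guarantee concrete, here is a hypothetical sensor that yields a constant `run_key` (the filename in the config is also a made-up placeholder). Dagster launches a run on the first evaluation and skips the duplicate request on every evaluation after that:

```python
from dagster import RunRequest, sensor


@sensor(pipeline_name="log_file_pipeline")
def run_once_sensor(_context):
    # The constant run_key means Dagster creates this run on the first
    # evaluation and skips the duplicate request on all later evaluations.
    yield RunRequest(
        run_key="only-once",
        run_config={"solids": {"process_file": {"config": {"filename": "data.txt"}}}},
    )
```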
In the directory sensor example, a `RunRequest` is yielded for each file during every sensor evaluation. Therefore, for a given sensor evaluation, there already exists a `RunRequest` with a `run_key` for any file that existed during the previous sensor evaluation. Dagster skips processing duplicate run requests, so it launches runs only for the files added since the last sensor evaluation. The result is exactly one run per file.
Run keys allow you to write sensor evaluation functions that declaratively describe what pipeline runs should exist, and help you avoid more complex logic that manages state. However, when dealing with high-volume external events, some state-tracking optimizations might be necessary.
When writing a sensor that deals with high-volume events, it might not be feasible to yield a `RunRequest` during every sensor evaluation. For example, you may have an s3 storage bucket that contains thousands of files.
When writing a sensor for such event sources, you can use run keys to maintain a cursor that limits the number of yielded run requests for previously processed events. The sensor API provides the last evaluated run key to serve as this cursor:

- `last_run_key`: a cursor field on `SensorExecutionContext` that returns the run key of the last run requested by a previous sensor evaluation.

Here is an example of our directory file sensor using `last_run_key` as a cursor for updated files:
```python
def build_run_key(filename, mtime):
    return f"{filename}:{str(mtime)}"


def parse_run_key(run_key):
    parts = run_key.split(":")
    return parts[0], float(parts[1])


@sensor(pipeline_name="log_file_pipeline")
def my_directory_sensor_cursor(context):
    last_mtime = parse_run_key(context.last_run_key)[1] if context.last_run_key else None

    for filename in os.listdir(MY_DIRECTORY):
        filepath = os.path.join(MY_DIRECTORY, filename)
        if os.path.isfile(filepath):
            fstats = os.stat(filepath)
            file_mtime = fstats.st_mtime
            # skip files that have not been modified since the cursor
            if last_mtime is not None and file_mtime <= last_mtime:
                continue
            # the run key should include mtime if we want to kick off new runs
            # based on file modifications
            run_key = build_run_key(filename, file_mtime)
            run_config = {"solids": {"process_file": {"config": {"filename": filename}}}}
            yield RunRequest(run_key=run_key, run_config=run_config)
```
By default, the Dagster Daemon runs a sensor 30 seconds after the previous sensor evaluation finishes executing. You can configure the interval using the `minimum_interval_seconds` argument on the `@sensor` decorator.

It's important to note that this interval represents a minimum interval between runs of the sensor, not the exact frequency at which the sensor runs. If you have a sensor that takes 2 minutes to complete, but `minimum_interval_seconds` is 5 seconds, the fastest the Dagster Daemon will run the sensor is every 2 minutes and 5 seconds. `minimum_interval_seconds` only guarantees that the sensor is not evaluated more frequently than the given interval.
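To illustrate the slow case with the numbers above, here is a hypothetical sensor whose evaluation takes about two minutes; despite the 5-second minimum interval, the daemon can run it at most roughly every 125 seconds:

```python
import time

from dagster import RunRequest, sensor


@sensor(pipeline_name="my_pipeline", minimum_interval_seconds=5)
def slow_sensor(_context):
    # Stand-in for an expensive external check that takes ~2 minutes.
    time.sleep(120)
    yield RunRequest(run_key=None, run_config={})
```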
For example, here are two sensors that specify two different minimum intervals:
```python
@sensor(pipeline_name="my_pipeline", minimum_interval_seconds=30)
def sensor_A(_context):
    yield RunRequest(run_key=None, run_config={})


@sensor(pipeline_name="my_pipeline", minimum_interval_seconds=45)
def sensor_B(_context):
    yield RunRequest(run_key=None, run_config={})
```
These sensor definitions are short, so they run in less than a second. Therefore, you can expect these sensors to run consistently around every 30 and 45 seconds, respectively.
For debugging purposes, it is often useful to describe why a sensor might not yield any runs for a given evaluation. The sensor evaluation function can yield a `SkipReason` with a string description that will be displayed in Dagit.

For example, here is our directory sensor, now updated to provide a `SkipReason` when no files are encountered:
```python
from dagster import SkipReason


@sensor(pipeline_name="log_file_pipeline")
def my_directory_sensor_with_skip_reasons(_context):
    has_files = False
    for filename in os.listdir(MY_DIRECTORY):
        filepath = os.path.join(MY_DIRECTORY, filename)
        if os.path.isfile(filepath):
            yield RunRequest(
                run_key=filename,
                run_config={"solids": {"process_file": {"config": {"filename": filename}}}},
            )
            has_files = True
    if not has_files:
        yield SkipReason(f"No files found in {MY_DIRECTORY}.")
```
To quickly preview what an existing sensor would generate when evaluated, you can run the CLI command `dagster sensor preview my_sensor_name`.
You can monitor and operate sensors in Dagit. There are multiple views that help with observing sensor evaluations, skip reasons, and errors.
To view the sensors page, click "All sensors" in the left-hand navigation pane. Here you can turn sensors on and off using the toggle.
If you click on any sensor, you can monitor all of its evaluations and the runs it has created. If your sensor throws an error or yields a skip reason, the sensor timeline view displays more information about the errors and skips.
A useful pattern is to create a sensor that checks for new `AssetMaterialization` events for a particular asset key. This can be used to kick off a pipeline that computes downstream assets or notifies appropriate stakeholders.

One benefit of this pattern is that it enables cross-pipeline and even cross-repository dependencies: each pipeline run instigated by an asset sensor is agnostic to the pipeline that emitted the triggering event.

Here is an example of a sensor that listens for asset materializations for a given asset key `my_table`:
```python
from dagster import AssetKey


@sensor(pipeline_name="my_pipeline")
def my_asset_sensor(context):
    events = context.instance.events_for_asset_key(
        AssetKey("my_table"), after_cursor=context.last_run_key, ascending=False, limit=1
    )
    if events:
        record_id, event = events[0]  # take the most recent materialization
        # the event record id doubles as the cursor: it becomes last_run_key
        # on the next evaluation
        yield RunRequest(
            run_key=str(record_id),
            run_config={},
            tags={"source_pipeline": event.pipeline_name},
        )
```
Sometimes you want to act on pipeline failures, e.g., to send an alert to a monitoring service whenever a pipeline fails. You can write a sensor that monitors Dagster's runs table and launches a specialized "alert" pipeline for each failed run.

For example, you can write an "alert" pipeline that sends a Slack message when it runs. Note that the pipeline depends on a `slack` resource:
```python
from dagster import ModeDefinition, ResourceDefinition, pipeline, solid
from dagster_slack import slack_resource


@solid(required_resource_keys={"slack"})
def slack_message_on_failure_solid(context):
    message = f"Solid {context.solid.name} failed"
    context.resources.slack.chat.post_message(channel="#foo", text=message)


@pipeline(
    mode_defs=[
        ModeDefinition(name="test", resource_defs={"slack": ResourceDefinition.mock_resource()}),
        ModeDefinition(name="prod", resource_defs={"slack": slack_resource}),
    ]
)
def failure_alert_pipeline():
    slack_message_on_failure_solid()
```
Then, you can define a sensor that fetches the failed runs from the runs table via `context.instance` and instigates a `failure_alert_pipeline` run for every failed run. Note that we use the failed run's id as the `run_key` to prevent sending an alert twice for the same pipeline run.
```python
from dagster import PipelineRunStatus, RunRequest, sensor
from dagster.core.storage.pipeline_run import PipelineRunsFilter


@sensor(pipeline_name="failure_alert_pipeline", mode="prod")
def pipeline_failure_sensor(context):
    runs = context.instance.get_runs(
        filters=PipelineRunsFilter(
            pipeline_name="your_pipeline_name",
            statuses=[PipelineRunStatus.FAILURE],
        ),
    )
    for run in runs:
        # Use the id of the failed run as run_key to avoid duplicate alerts.
        yield RunRequest(run_key=str(run.run_id))
```
If you would like to set up success or failure handling policies on solids, you can find more information on the Solid Hooks page.
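As a hedged sketch of that approach, a solid-level failure hook might look like the following, reusing the `slack` resource from the alert pipeline above (see the Solid Hooks page for the actual API details):

```python
from dagster import HookContext, failure_hook


@failure_hook(required_resource_keys={"slack"})
def slack_on_solid_failure(context: HookContext):
    # Fires whenever a solid this hook is applied to fails.
    message = f"Solid {context.solid.name} failed"
    context.resources.slack.chat.post_message(channel="#foo", text=message)
```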
For pipelines that should initiate new runs for new paths in an s3 bucket, the `dagster-aws` package provides the useful helper function `get_s3_keys`.

Here is an example of a sensor that listens to a particular s3 bucket `my_s3_bucket`:
```python
from dagster_aws.s3.sensor import get_s3_keys


@sensor(pipeline_name="my_pipeline")
def my_s3_sensor(context):
    new_s3_keys = get_s3_keys("my_s3_bucket", since_key=context.last_run_key)
    if not new_s3_keys:
        yield SkipReason("No new s3 files found for bucket my_s3_bucket.")
        return
    for s3_key in new_s3_keys:
        yield RunRequest(run_key=s3_key, run_config={})
```