Terrestrial usecases


Use Case Name: Detached Execution


Version:


Summary:

  • User would like the ability to begin a Kepler workflow on a laptop, detach/disconnect, and reconnect at a later time to check on execution status. Notifications may optionally be sent when workflow execution completes (how notifications are sent will likely be configured in the workflow).
  • Ability to launch from the command line in batch mode.
  • Ability to re-attach to a running job to check its progress.
  • User would also like to check the status of, and stop, workflow jobs via a web interface.


Preconditions:

  • A complete workflow has been assembled.
  • The distributed execution system has been built and is running on an accessible server.
  • The server has all software the workflow requires (e.g., R, MATLAB).
  • No other users are currently connected to the workflow with write access.


Triggers:

  • User launches a job in batch mode; it automatically detaches at this point.
  • From the GUI, user selects "detach".


Basic course of events:

  • User designs a workflow that includes a notification mechanism (e.g., one using the Email Sender actor).
  • User identifies server(s) and establishes a connection.
  • User executes the workflow in either batch or GUI mode (when the workflow has graphical output, batch mode can handle this by, e.g., ignoring it or writing it out to files such as PDFs).
  • When the workflow completes, it sends a notification.
  • While the job is running, the user can reconnect to the server and check status, alter settings, stop the job, or retrieve results (a sketch of this pattern follows the list).
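
A minimal sketch of the detach/reattach pattern in Python, assuming a file-based status channel and a local SMTP server for notification (the workflow body, status-file path, and addresses are all illustrative, not Kepler's actual mechanism):

    import json, multiprocessing, pathlib, time

    STATUS = pathlib.Path("workflow_status.json")    # hypothetical status channel

    def run_workflow():
        for step in range(5):                        # stand-in for real workflow steps
            STATUS.write_text(json.dumps({"state": "running", "step": step}))
            time.sleep(1)
        STATUS.write_text(json.dumps({"state": "done"}))
        try:                                         # optional notification on completion
            import smtplib
            with smtplib.SMTP("localhost") as s:     # assumes a local mail server
                s.sendmail("kepler@example.org", ["user@example.org"],
                           "Subject: workflow done\n\nExecution finished.")
        except OSError:
            pass                                     # no mail server available; skip

    if __name__ == "__main__":
        multiprocessing.Process(target=run_workflow).start()   # "detach"
        print("launched; reattach later by reading", STATUS)

Reattaching then amounts to reading the status file from any machine that can see it; a web interface could read and display the same file.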


Alternative paths:

  • Instead of using Kepler, view progress on a webpage.
  • Ability to suspend the workflow without detaching from it.


Postconditions:
Business rules:
Notes:
Author and date:




Use Case Name: Smart Reruns
Version:
Preconditions:

User has designed a workflow that specifies:

  1. conditions under which a complete rerun of the workflow should occur when the user executes it (e.g., "if the data sources I have marked have changed, a smart rerun cannot occur"), and
  2. which intermediate derived data products to save for use by smart reruns.


Summary:

Users want the ability to rerun certain workflows in a "smart" way, that is, without regenerating intermediate derived data products if the source data (and hence, for non-stochastic models, the derived data) have not changed. Some intermediate derived data products take a very long time to produce, so it is beneficial to skip their generation when possible.
Triggers:

User executes a workflow that uses Smart Rerun features and all Smart Rerun criteria have been met.


Basic course of events:

Workflow developer designs a workflow that makes use of Smart Rerun components.

Upon execution of this workflow, Smart Rerun components check whether their input data sources have changed (possibly the user can also specify additional conditions that must be met). If the data sources have changed (or are being used for the first time), the components (re)produce derived data products and save those marked as Smart Rerun derived data products. If the data sources have not changed and previously saved derived data products exist, a Smart Rerun can occur, reusing the existing products. Special considerations for stochastic systems must be taken into account in the design of this use case (do we allow Smart Reruns for certain stochastic models?).
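
A minimal sketch of the change check in Python, assuming file-based data sources and a content-hash fingerprint (the cache layout is illustrative, and the reuse is valid only for non-stochastic derivations):

    import hashlib, pathlib, pickle

    CACHE = pathlib.Path("rerun_cache")
    CACHE.mkdir(exist_ok=True)

    def fingerprint(paths):
        """Hash the contents of all input data sources."""
        h = hashlib.sha256()
        for p in sorted(paths):
            h.update(pathlib.Path(p).read_bytes())
        return h.hexdigest()

    def smart_run(inputs, derive):
        """Reuse the saved derived product unless an input has changed."""
        key = CACHE / fingerprint(inputs)
        if key.exists():                      # sources unchanged: smart rerun
            return pickle.loads(key.read_bytes())
        product = derive(inputs)              # sources changed or first use
        key.write_bytes(pickle.dumps(product))
        return product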


Alternative paths:

  • It is possible a user would also like the ability to do a Smart Rerun even if the Smart Rerun criteria are not met, i.e., they might want to ignore the fact that data sources have changed and reuse existing derived data products to test something else.


Postconditions:

Intermediate derived data products produced by Smart Rerun actors are saved.


Business rules:
Notes:
Author and date:




Use Case Name: Optional archiving of intermediate derived datasets


Version:
Summary:

Users want to be able to save intermediate derived datasets.


Preconditions:
Triggers:
Basic course of events:
Alternative paths:
Postconditions:
Business rules:
Notes: This use case can be seen as required by the "Smart Rerun", "Manage a group of model runs", and "Archive a complete end to end run" use cases.
Author and date:




Use Case Name: Archive a complete end to end run ("publication ready archive")
Version:
Summary:

Users would like the ability to create a "publication ready archive": an archive that contains the Kepler workflow, all required inputs (including data sources such as database tables), and all produced output. For example, at the completion of a study, a user can produce this archive and submit it (or a link to it) alongside their paper to peers and journals.


Preconditions:

  • User has created a workflow that contains everything needed (e.g., data sources).
  • User documents any external software the workflow requires.


Triggers:

User exports the workflow.


Basic course of events:

User creates a workflow. User turns on "publication ready archive" mode. User runs the workflow and exports it to an archive at completion.
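
A minimal sketch of the export step in Python, assuming everything already exists as files on disk (the layout inside the archive is an assumption, not an actual Kepler export format):

    import zipfile

    def make_publication_archive(workflow_file, inputs, outputs,
                                 archive="study_archive.zip"):
        with zipfile.ZipFile(archive, "w") as z:
            z.write(workflow_file, arcname="workflow/" + workflow_file)
            for f in inputs:                 # data sources, database dumps, etc.
                z.write(f, arcname="inputs/" + f)
            for f in outputs:                # all produced output
                z.write(f, arcname="outputs/" + f)
        return archive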


Alternative paths:

User uses features from the "Smart Rerun", "Manage a group of model runs", and "save intermediate derived data products" use cases (i.e., a special "publication ready archive" mode might not be required).


Postconditions:


Business rules:
Notes:

A specific "publication ready archive" option would not be required if requirements for usecases mentioned in Alt Paths are implemented.
Author and date:




Use Case Name: Actor specific annotations (gui linked)
Version:
Summary:

Users would like to be able to produce annotations that are visually closely coupled with actors. When an actor is dragged around the canvas, if it has an associated annotation, the annotation moves alongside it. This does not require a graphical link (e.g., a line or box) that groups the two, but this could be done. Another way to approach this would be mouseover "alt text" popups.


Preconditions:

A component has been created with which to associate an annotation.


Triggers:
Basic course of events:

User creates a component and chooses to create an associated annotation for it.


Alternative paths:
Postconditions:
Business rules:
Notes:
Author and date:




Use Case Name: Actor for Data-Merge for specific product
Version:
Summary:

Users want the ability to easily merge many different data sources into one. For example, in the REAP terrestrial case study users might want to merge a) data in differently formatted Excel files and text files, b) near-real-time data streams, and c) data pulled from an archive. Users want this process to convert units if need be.


Preconditions:

An instance of the data-merge component has been created, with its inputs connected to the desired data sources.


Triggers:

User executes workflow.


Basic course of events:

User creates an instance of the data-merge component and connects its input ports to the desired data sources. User specifies properties in the data-merge actor regarding how to merge the data (e.g., how to convert units, whether interpolation or decimation should occur, which differently named fields correspond, etc.).
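
A minimal sketch of such a merge in Python, assuming (timestamp, value, unit) records and a single Fahrenheit-to-Celsius conversion rule (both are illustrative assumptions):

    def to_celsius(value, unit):
        return (value - 32.0) * 5.0 / 9.0 if unit == "F" else value

    def merge(sources):
        """Each source is a list of (timestamp, value, unit) records."""
        merged = []
        for records in sources:
            merged.extend((t, to_celsius(v, u)) for t, v, u in records)
        return sorted(merged)             # one time-ordered, unit-consistent stream

    excel_rows  = [(2, 68.0, "F"), (4, 72.5, "F")]   # hypothetical spreadsheet rows
    sensor_rows = [(1, 19.5, "C"), (3, 20.1, "C")]   # hypothetical sensor stream
    print(merge([excel_rows, sensor_rows]))          # [(1, 19.5), (2, 20.0), ...]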


Alternative paths:

User merges the data themselves.


Postconditions:

A merged derived source data product is produced.


Business rules:
Notes:
Author and date:




Use Case Name: Integrate database actors
Version:
Summary: Make database actors more consistent.

The DatabaseQuery actor has different output ports than the Metacat database actors; users would like these actors to be more consistent. Users would also like to be able to view a database and its tables with this integrated actor.


Preconditions:
Triggers:
Basic course of events:


Alternative paths:
Postconditions:
Business rules:
Notes:
Author and date:




Use Case Name: Database actor option: snapshot
Version:
Summary:

Snapshot vs. real-time query of the database each time the workflow is run.

Users would like the ability to use a database (or, more likely, simply some data from a table in a database) in two ways (a minimal sketch of both modes follows the list):

  1. User can download a snapshot of the database, or of data from the database, and operate on this snapshot in all subsequent runs, or
  2. Each time the workflow is run, the (potentially dynamic) database is queried and the resulting response is used.
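
A sketch of the two modes in Python, assuming a local SQLite database (the query, table, and snapshot path are illustrative assumptions):

    import pathlib, pickle, sqlite3

    SNAPSHOT = pathlib.Path("table_snapshot.pkl")

    def fetch(db_path, query, mode="realtime"):
        if mode == "snapshot" and SNAPSHOT.exists():
            return pickle.loads(SNAPSHOT.read_bytes())   # reuse first download
        with sqlite3.connect(db_path) as con:            # query the live database
            rows = con.execute(query).fetchall()
        if mode == "snapshot":
            SNAPSHOT.write_bytes(pickle.dumps(rows))     # cache for later runs
        return rows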


Preconditions:

  • User has set up a workflow that uses a database actor.
  • User has configured the database actor for either snapshot mode or real-time mode.


Triggers:

Workflow containing database actor is run.


Basic course of events:


Alternative paths:
Postconditions:
Business rules:
Notes:
Author and date:




Use Case Name: Event Notification Monitoring system
Version:
Summary:

Check whether certain triggers or thresholds have been hit, and send out alerts if so.

Users would like this kind of system to typically run automatically and periodically on a cron or cron-like system that is easy to set up.

Examples:

  • monitor rainfall thresholds
  • light monitoring, e.g., if ambient light < ground-level light, send email
  • animals outside a certain radius
  • proximity of different animals, e.g., mating events


Preconditions:
Triggers:

A specified threshold or condition has been reached, or a certain type of error has occurred.


Basic course of events:

  • User designs an event-monitoring workflow and sets it to run indefinitely or periodically.
  • The workflow runs continuously or periodically and checks whether certain conditions are true.
  • If a condition is met, a notification is sent via the channel specified in the workflow (e.g., email). A sketch of the check loop follows the list.
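
A minimal sketch of the check loop in Python; the sensor reading, threshold, and alert channel are stand-ins (a real deployment might run this under cron instead of looping):

    import random, time

    RAIN_THRESHOLD_MM = 50.0

    def read_rainfall_mm():
        return random.uniform(0, 60)         # placeholder for a real sensor query

    def monitor(poll_seconds, cycles):
        for _ in range(cycles):              # or loop indefinitely under cron
            level = read_rainfall_mm()
            if level > RAIN_THRESHOLD_MM:    # condition met: send notification here
                print(f"ALERT: rainfall {level:.1f} mm exceeds threshold")
            time.sleep(poll_seconds)

    monitor(poll_seconds=1, cycles=3)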


Alternative paths:
Postconditions:
Business rules:
Notes:
Author and date:




Use Case Name: Manage a group of model runs
Version:
Summary:

Users would like to be able to design a workflow that allows for the management of a group of model runs. This means:

  • allowing the user to easily define a multi-dimensional parameter sweep
  • allowing the user to redefine the parameter sweep partway through execution
  • saving all important aspects of each run (a log of parameters used, changes made, and data products produced, including graphs)
  • grouping sets of products in an understandable way (e.g., a legend of parameters used on each graph, listed in the metadata of the products produced)
  • allowing the user to specify which intermediate data products are saved
  • allowing the parameter space to be read from a file the user has configured
  • allowing for "irregular sweeps"
  • allowing random parameters

This might be accomplished with a parameter sweep actor; a minimal sketch follows.
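
A minimal sketch of such a sweep in Python; the model and parameter names are illustrative assumptions (an irregular sweep could instead read its combinations from a user-supplied file):

    import itertools, json

    def model(growth, mortality):
        return growth - mortality            # stand-in for a real ecological model

    space = {"growth": [0.1, 0.2, 0.3], "mortality": [0.05, 0.1]}

    log = []                                 # record of every run
    for combo in itertools.product(*space.values()):
        params = dict(zip(space.keys(), combo))
        log.append({"params": params, "result": model(**params)})

    print(json.dumps(log, indent=2))         # parameters and products, grouped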


Preconditions:
Triggers:
Basic course of events:

User develops a workflow that includes a parameter sweep component. The parameter sweep actor's input port(s) are connected to parameters or parameter source(s).
Alternative paths:

User designs a workflow that uses and changes parameter components.


Postconditions:
Business rules:
Notes:
Author and date:




Use Case Name: Instruction set for wrapping external program
Version:
Summary:

Users would like to easily wrap an external program in Kepler. By "wrap" we mean creating a simple workflow that runs an external program that has already been developed, e.g., a C or Fortran program.

Users would like either or both of the following (a sketch of the underlying mechanism appears after this list):

  1. an instruction set / tutorial for easily "wrapping" an external program into a workflow in Kepler
  2. a wizard inside Kepler that takes them through the steps of producing this workflow.
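
Whatever form the tutorial or wizard takes, the underlying mechanism is roughly the following; a minimal Python sketch, where the program name and flags are hypothetical:

    import subprocess

    def run_external(exe, args, stdin_text=None):
        """Run a pre-built executable and capture its output for downstream steps."""
        result = subprocess.run([exe, *args], input=stdin_text,
                                capture_output=True, text=True, check=True)
        return result.stdout

    # Hypothetical usage: a Fortran model compiled to ./popmodel
    # output = run_external("./popmodel", ["--years", "100"])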


Preconditions:

A working external program has already been written.


Triggers:

User runs the wizard or opens the tutorial.


Basic course of events:

User executes the wizard inside Kepler, points it at their external program, and configures any required settings via the wizard.


Alternative paths:

User reads a tutorial that explains how to "wrap" external programs.


Postconditions:
Business rules:
Notes:
Author and date:




Use Case Name: Easily create and share workflows and actors
Version:
Summary:

Users want an easy way to create and share workflows with others. This means making sure that exporting, emailing, and importing workflows is painless, and that submitting workflows and actors to a repository that others can access is painless.


Preconditions:
Triggers:
Basic course of events:

  1. User1 creates a component or workflow.
  2. User1 exports or publishes the component or workflow.
  3. User2 imports the component or workflow.


Alternative paths:
Postconditions:
Business rules:
Notes:
Author and date:




Use Case Name: facilitate peer review process


Version:
Summary:
Preconditions:
Triggers:
Basic course of events:
Alternative paths:
Postconditions:
Business rules:
Notes:
Author and date:




Use Case Name: use Kepler as a teaching tool
Version:
Summary:

Users would like to use Kepler as a teaching tool. This means adding features such as:

  • the ability to "lock" certain components away from students
  • improving Kepler's animation (e.g., showing parameters as they change) so live demos are more expressive
  • improving Kepler's error messages.


Preconditions:
Triggers:
Basic course of events:
Alternative paths:
Postconditions:
Business rules:
Notes:
Author and date:




Use Case Name: Ecological Actors
Version:
Summary:

Users would like a set of components that perform common ecological functions (e.g., mean or sum) over temporal and spatial scales.
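
As one illustration, a minimal Python sketch of a mean over a temporal scale (daily bins); the record format is an assumption:

    from collections import defaultdict

    def daily_mean(records):
        """records: iterable of (day, value); returns {day: mean value}."""
        sums = defaultdict(lambda: [0.0, 0])
        for day, value in records:
            sums[day][0] += value
            sums[day][1] += 1
        return {day: total / n for day, (total, n) in sums.items()}

    print(daily_mean([(1, 10.0), (1, 14.0), (2, 8.0)]))   # {1: 12.0, 2: 8.0}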


Preconditions:

If ecologists want to create some of these components themselves: improvements to the workflow and component repository-submission process have been made.


Triggers:
Basic course of events:

User makes use of the new ecological components in the library.


Alternative paths:
Postconditions:
Business rules:
Notes:
Author and date:




Use Case Name: FakeData Generator Actors
Version:
Summary:

Users would like a set of FakeData Generator Actors appropriate to the REAP case studies. A FakeData actor should produce data in the same way a real data source would. For example, a "FakeData_CR800_Temperature" actor might produce a datastream of temperature data at one "sample" per minute. The actual data in the datastream should be indistinguishable from a real temperature datastream coming out of a Campbell CR800 datalogger deployed in the field. Somewhere in its metadata, however, a FakeData Generator Actor should clearly identify itself as fake.
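
A minimal Python sketch of such a stream; the random-walk value model and the metadata fields are illustrative assumptions:

    import random

    def fake_cr800_temperature(minutes):
        """Yield (minute, degrees C) pairs resembling a datalogger stream."""
        temp = 18.0
        for minute in range(minutes):
            temp += random.gauss(0, 0.05)    # small random walk around 18 C
            yield minute, round(temp, 2)

    METADATA = {"source": "FakeData_CR800_Temperature", "fake": True}  # self-identifies

    for sample in fake_cr800_temperature(5):
        print(sample)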


Preconditions:
Triggers:
Basic course of events:

  • User drags the actor to the canvas like any other data source.
  • User configures the actor as a certain type of data source (e.g., a Campbell temperature stream) with the desired sampling characteristics.


Alternative paths:
Postconditions:
Business rules:
Notes:
Author and date:




Use Case Name: 3D plots
Version:
Summary:

Users would like to be able to produce 3D plots with Kepler (ggobi, gnuplot, Java 3D?).
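
One possible route, sketched with matplotlib (which is not among the candidates listed above, so treat this purely as an illustration):

    import numpy as np
    import matplotlib.pyplot as plt
    from mpl_toolkits.mplot3d import Axes3D   # noqa: needed on older matplotlib

    x, y = np.meshgrid(np.linspace(-2, 2, 40), np.linspace(-2, 2, 40))
    z = np.exp(-(x**2 + y**2))                # sample surface

    ax = plt.figure().add_subplot(projection="3d")
    ax.plot_surface(x, y, z)
    plt.show()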


Preconditions:
Triggers:
Basic course of events:
Alternative paths:
Postconditions:
Business rules:
Notes:
Author and date:




Use Case Name: iterator wrapper actor
Version:
Summary:

Users would like an Iterator Actor that iterates a workflow, or parts of a workflow, in a smart way. See the Parameter Sweep use case ("Manage a group of model runs").


Preconditions:
Triggers:
Basic course of events:
Alternative paths:
Postconditions:
Business rules:
Notes:
Author and date:




Use Case Name: markov chain monte carlo (mcmc)
Version:
Summary:

Allow this alternative technique for running the model. It cannot be used if the model is stochastic.
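
A minimal Metropolis sampler in Python, purely to illustrate the technique (in practice the tools named in the notes, WinBUGS and R2WinBUGS, would be used); the standard normal target is an assumption:

    import math, random

    def metropolis(n, step=1.0):
        """Sample a standard normal target with a random-walk proposal."""
        x, chain = 0.0, []
        for _ in range(n):
            proposal = x + random.uniform(-step, step)
            # accept with probability min(1, target(proposal) / target(x))
            if math.exp(-(proposal**2 - x**2) / 2.0) > random.random():
                x = proposal
            chain.append(x)
        return chain

    samples = metropolis(10000)
    print(sum(samples) / len(samples))        # should be near 0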


Preconditions:
Triggers:
Basic course of events:
Alternative paths:
Postconditions:
Business rules:
Notes: (see WinBUGS and R2WinBUGS)
Author and date:




Use Case Name: breakpoints
Version:
Summary:

Users would like to be able to add breakpoints to a workflow. When a breakpoint is reached, workflow execution is paused or stopped. Users would also like breakpoints to be conditional; e.g., a breakpoint might only pause or stop execution once a certain condition (e.g., equilibrium) has been reached.
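
A minimal sketch of a conditional breakpoint in Python; the model step and the equilibrium test are illustrative assumptions:

    def run_with_breakpoint(state=100.0, tolerance=1e-6):
        while True:
            new_state = state * 0.5 + 10.0   # stand-in for one model step
            if abs(new_state - state) < tolerance:
                print(f"breakpoint hit: equilibrium at {new_state:.4f}")
                break                        # pause or stop for inspection here
            state = new_state

    run_with_breakpoint()                    # converges to 20.0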


Preconditions:
Triggers:
Basic course of events:
Alternative paths:
Postconditions:
Business rules:
Notes:
Author and date:




Use Case Name: debugging
Version:
Summary:

Users would like an easy way to debug workflows. Some ideas: a special debug output port on all actors, or a specific debug actor. (Most workflow developers make extensive use of the Display actor to verify output after each step as they build workflows; new users might not think of this, so an actor named or tagged "Debug" might be of use.)
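
A minimal sketch of the debug-actor idea in Python: a pass-through step that prints whatever flows across a connection and forwards it unchanged (the name and placement are illustrative):

    def debug(label, value):
        print(f"[debug:{label}] {value!r}")  # side channel for the developer
        return value                         # data continues downstream unchanged

    result = debug("after-sort", sorted(debug("raw", [3, 1, 2])))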


Preconditions:
Triggers:
Basic course of events:
Alternative paths:
Postconditions:
Business rules:
Notes:
Author and date:


