<h3>Currently being developed by Anubhav Jana, IITB</h3>
<h4>This serverless FaaS platform supports individual function registration, DAG registration, and trigger registration associated with DAGs/functions. The platform also supports various DAG primitives, which are described in this document for reference.</h4>
<h2> Guide: Register a Function </h2>
<h4> This section will guide you through registering a function. The following prerequisites must be fulfilled before you register a function: </h4>
* Dockerfile - the file from which the image will be built to run your function
* Python file - the application logic to run the action/function (in this example, "test.py")
* requirements.txt - add all dependent pip packages to this file. If you have no library dependencies, submit a blank requirements.txt
<h5> You must have the above 3 files before you register the function </h5>
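For reference, here is a minimal sketch of the Python file and Dockerfile. The main(params) handler convention, base image, and entrypoint are assumptions for illustration; adapt them to your own function.

```python
# test.py -- minimal action sketch. The main(params) handler convention
# is an assumption for illustration, not a confirmed platform contract.
def main(params):
    number = int(params.get("number", 0))
    # Return a JSON-serializable response; downstream nodes can
    # consume these keys (e.g. "result") via "outputs_from".
    return {"result": "even" if number % 2 == 0 else "odd"}
```

```dockerfile
# Dockerfile -- sketch only; the base image and entrypoint are assumptions.
FROM python:3.9-slim
WORKDIR /app
COPY requirements.txt .
RUN pip install --no-cache-dir -r requirements.txt
COPY test.py .
CMD ["python", "test.py"]
```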
<h4> Following is the sample code <u>register_function.py</u> to register a function. This will create a new function named "testaction" and register it onto our function store handled by us. The URL endpoint is: /register/function/function_name</h4>
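Since register_function.py itself is not reproduced here, the sketch below assumes a multipart POST of the three prerequisite files; the host, port, and form-field name are assumptions, so adjust them to match your deployment.

```python
# register_function.py -- sketch assuming a multipart file upload;
# the host/port and the "files" form-field name are assumptions.
import requests

API_HOST = "http://localhost:5000"  # assumed platform address
FUNCTION_NAME = "testaction"

files = [
    ("files", ("Dockerfile", open("Dockerfile", "rb"))),
    ("files", ("test.py", open("test.py", "rb"))),
    ("files", ("requirements.txt", open("requirements.txt", "rb"))),
]

# POST the three prerequisite files to the function store
reply = requests.post(f"{API_HOST}/register/function/{FUNCTION_NAME}", files=files)
print(reply.status_code, reply.text)
```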
<h2> Guide: Register a DAG </h2>
<h4> This section will guide you through registering a DAG. The following prerequisite must be fulfilled before you register a DAG: </h4>
* dag.json - a JSON specification file that defines the DAG. The accepted DAG format and a sample example are provided in this README.
<h4> Following is the sample code <u>dag_register.py</u> to register a DAG. This will register a new DAG onto our DAG store handled by us. The URL endpoint is: /register/dag</h4>
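As dag_register.py is not reproduced here, the sketch below assumes the dag.json specification is posted as the JSON request body; the host and port are assumptions.

```python
# dag_register.py -- sketch assuming the DAG spec is sent as the JSON body;
# the host/port are assumptions.
import json
import requests

API_HOST = "http://localhost:5000"  # assumed platform address

with open("dag.json") as f:
    dag_spec = json.load(f)

# POST the DAG specification to the DAG store
reply = requests.post(f"{API_HOST}/register/dag", json=dag_spec)
print(reply.status_code, reply.text)
```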
<h2> Guide: Register a Trigger </h2>
<h4> Following is the sample code <u>trigger_register.py</u> to register a trigger. This will register a new trigger onto our Trigger store handled by us. The URL endpoint is: /register/trigger</h4>
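trigger_register.py is likewise not shown, so the sketch below assumes a small JSON body naming the trigger and the DAG it is associated with; every field name here is hypothetical.

```python
# trigger_register.py -- sketch only; the trigger schema below
# (trigger_name, type, dag_name) is hypothetical.
import requests

API_HOST = "http://localhost:5000"  # assumed platform address

trigger = {
    "trigger_name": "mydagtrigger",  # hypothetical field names/values
    "type": "dag",
    "dag_name": "odd-even-test",
}

# POST the trigger definition to the Trigger store
reply = requests.post(f"{API_HOST}/register/trigger", json=trigger)
print(reply.status_code, reply.text)
```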
The DAG specification includes both control dependencies and data dependencies.
<h4> DAG Fields </h4>
* "name" : Name of the DAG
* "node_id": Name of the function/action
* "node_id_label": Name you want to give to the node
* "primitive": Type of primitive the action supports - condition,parallel,serial(sequential)
* "condition": If primitive type is "condition", then you should provide the following fields "source", "operator" and "target", else you should leave it as ""
* "source": Specify any one of the response keys of the current node_id. For e.g. if one of the keys in response json is "result", and you want to provide a condition that if result=="even", then specify "source" as "result" and "target" as "even"
* "operator": Mathematical operations like "equals", "greater_than" , "less_than", "greater_than_equals", "less_than_equals" are accepted.
* "target": Specify the target value. It can accept both integer and string.
* "next": Specify the name of next node_id to be executed. If primitive = "parallel", "next" will take list of node_ids, else it will accept a single node_id in "<nodeid>" format. If this is the last node_id(ending node of the workflow), keep it as "".
* "branch_1": Specify node_id if primitive == condition else keep "". This is the target branch which will execute if condition is true
* "branch_2": Specify node_id if primitive == condition else keep "". This is the alternate branch which will execute if condition is false
* "arguments": Keep it blank for each node_id. It will get populated with json when the DAG is instantiated with the trigger
* "outputs_from": Specify the list of node_id/node_ids whose output current node_id needs to consume. This is for data dependancy.
<h4>Suppose you want to merge the outputs from two actions, action_1 and action_2, in your action_3. Then you must include the following lines in action_3 to process the incoming inputs from action_1 and action_2 (see the sketch after the key descriptions below). This is applicable to the merging primitive as well as to handling output from multiple actions.</h4>
* "key_action_1" refers to a key from action_1 response which you want to use in action_3
* "key_action_2" refers to a key from action_2 response which you want to use in action_3