A message unit test process would be helpful. Its requirements could include:
- testing whether the agent responds with the proper events when a floor-sent event requires a response (e.g. "getManifests" --> "publishManifests")
- testing at least one utterance that should produce a Dialog Event response; the specific DialogEvent may vary, so the test only needs to confirm that one is returned
- ensuring that all required events are correctly created and parsed, with all required parameters, per the openfloor libraries
- ensuring that optional events, if received, are correctly parsed with their required parameters
- ensuring that implementation-specific events don't break anything, as long as they are well-formed JSON
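The parameter checks above could be sketched as a small validation helper. This is a minimal sketch only: the event-type and parameter names in `REQUIRED_PARAMS` are illustrative placeholders, since the authoritative schemas live in the openfloor libraries.

```python
# Hypothetical map of event type -> required parameter names.
# The real required fields come from the openfloor libraries; these
# entries are illustrative assumptions, not the actual schema.
REQUIRED_PARAMS = {
    "utterance": ["dialogEvent"],
    "publishManifests": ["servicingManifests"],
}

def validate_event(event):
    """Return a list of problems found in one OFP event dict (empty = OK)."""
    problems = []
    event_type = event.get("eventType")
    if event_type is None:
        problems.append("missing eventType")
    elif event_type in REQUIRED_PARAMS:
        params = event.get("parameters", {})
        for field in REQUIRED_PARAMS[event_type]:
            if field not in params:
                problems.append(f"{event_type} missing parameter {field!r}")
    # Unknown or implementation-specific event types are accepted as long
    # as the surrounding JSON parsed, per the well-formedness requirement.
    return problems
```

A real harness would replace `REQUIRED_PARAMS` with schema checks driven by the openfloor libraries themselves; the shape of the check (per-event, returning a problem list) is the reusable part.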
The implementation could include a file listing agent endpoints, each with one example utterance. A Python program would then iterate through the file and exercise all the OFP events against each endpoint, writing a log entry with the result of each message. If a server agent fails, or returns a wrong or ill-formed OFP message, the error is logged with an informative message. In addition, a summary file, perhaps in CSV format, records where tests succeeded or failed.
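The harness described above might look something like the following sketch. The endpoint-file format (one `endpoint,utterance` pair per line), the envelope shapes, and the expected response event names are assumptions for illustration; real payloads would be constructed and validated by the openfloor libraries.

```python
import csv
import json
import logging
import urllib.request

logging.basicConfig(filename="ofp_test.log", level=logging.INFO)

def load_agents(path):
    """Assumed file format: one 'endpoint,utterance' pair per line."""
    with open(path) as f:
        return [line.strip().split(",", 1) for line in f if line.strip()]

def send_event(endpoint, payload):
    """POST a JSON payload and return the parsed response, or None on failure."""
    req = urllib.request.Request(
        endpoint,
        data=json.dumps(payload).encode("utf-8"),
        headers={"Content-Type": "application/json"},
    )
    try:
        with urllib.request.urlopen(req, timeout=10) as resp:
            return json.loads(resp.read())
    except Exception as exc:  # network error, HTTP error, or ill-formed JSON
        logging.error("POST to %s failed: %s", endpoint, exc)
        return None

def check_response(response, expected_event):
    """True if the response is a dict whose events include the expected type."""
    if not isinstance(response, dict):
        return False
    events = response.get("events", [])
    return any(e.get("eventType") == expected_event for e in events)

def run_tests(agents_path, summary_path="summary.csv"):
    results = []
    for endpoint, utterance in load_agents(agents_path):
        # Illustrative envelopes; field names are assumptions, not the spec.
        tests = [
            ("getManifests", {"events": [{"eventType": "getManifests"}]},
             "publishManifests"),
            ("utterance", {"events": [{"eventType": "utterance",
                                       "parameters": {"text": utterance}}]},
             "utterance"),
        ]
        for name, payload, expected in tests:
            resp = send_event(endpoint, payload)
            ok = check_response(resp, expected)
            level = logging.INFO if ok else logging.ERROR
            logging.log(level, "%s %s: %s", endpoint, name,
                        "pass" if ok else "wrong or ill-formed response")
            results.append((endpoint, name, "pass" if ok else "fail"))
    # Summary file in CSV format, one row per test.
    with open(summary_path, "w", newline="") as f:
        writer = csv.writer(f)
        writer.writerow(["endpoint", "test", "result"])
        writer.writerows(results)
```

Keeping the per-message log separate from the CSV summary matches the proposal: the log carries the informative error messages, while the CSV gives a quick pass/fail overview per endpoint and event.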