===== Deconcentrator-JS =====
  
[[https://raw.githubusercontent.com/beuthbot/deconcentrator-js|{{https://raw.githubusercontent.com/beuthbot/deconcentrator-js/master/.documentation/DeconcentratorJSLogo100.png}}]]
  
> BeuthBot deconcentrator written in JavaScript
  
==== Feature ====
  
The deconcentrator queries different NLU processors, compares their results, and tries to choose the best fitting answer. The NLU processors, like RASA, must know their domain on their own. The deconcentrator simply compares the confidence scores of the intents returned by the processors and returns the intent with the highest score.
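
To make the idea concrete, here is a minimal JavaScript sketch of this selection step (illustrative only, not the project's actual code):

<code javascript>
// Illustrative sketch: each processor delivers an interpretation that carries
// an intent with a confidence score; the interpretation with the highest
// score above the minimum threshold wins.
function chooseBestIntent(interpretations, minConfidenceScore = 0.8) {
  return interpretations
    .filter(i => i.intent && i.intent.confidence >= minConfidenceScore)
    .reduce((best, current) =>
      !best || current.intent.confidence > best.intent.confidence ? current : best, null)
}
</code>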
  
==== Getting Started ====
  
== Prerequisites ==
  
  * [[https://nodejs.org/en/|node.js]] (Development)
  * [[https://www.docker.com|Docker]] (Development, Deployment)
  * [[https://docs.docker.com/compose/|docker-compose]] (Development, Deployment)
  
== Clone Repository ==
  
<code>
# clone project
$ git clone https://github.com/beuthbot/deconcentrator-js.git

# copy environment file and edit properly
$ cp .env.sample .env
</code>
  
== Install ==
  
There are two different ways of running the deconcentrator. The first is with `npm`, `Node.js`'s package manager, which is probably the better choice for development. The other way is to run deconcentrator-js in a Docker container with `docker-compose`.
  
== Install with npm ==
Using npm, simply type in the following commands.
<code>
# install dependencies
$ npm install

# start running the deconcentrator at localhost:8338
$ npm start run
</code>
This will run the deconcentrator on its default port `8338` and with the default RASA service URL `http://localhost:5005/model/parse`.
  
== Install with docker-compose.yml ==
  
Using docker-compose is probably the easiest way of running the deconcentrator. Simply type
<code>
$ docker-compose up
</code>
to run a container with the deconcentrator. The docker-compose file also uses port `8338` as the default. The RASA endpoint is taken from the `.env` file. Make sure to edit it to your needs. Have a look at the sample file `.env.sample` and the section //.env// below.
  
==== Overview ====
  
=== Structure ===
  
^ Location            ^ About                                                  ^
| `.documentation/`   | Contains documentation, UML and icon files.            |
| `model/`            | Contains the processors and the `processor-queue.js`.  |
  
  
=== Functionality ===
  
  
<uml>
@startuml

participant "gateway" as GW

box "deconcentrator-js" #LightBlue
participant "deconcentrator.js" as DC
participant "processor-queue.js" as PQ
participant "rasa-processor.js" as RP
participant "PROC_1.js" as P1
participant "PROC_2.js" as P2
end box

GW -> DC: request\nwith message
activate DC
DC -> DC: create and fill queue
DC -> PQ: run
activate PQ
PQ -> RP: (async) request
activate RP
PQ -> P1: (async) request
activate P1
RP -> PQ: interpretation
deactivate RP
PQ -> P2: (async) request
activate P2
P1 -> PQ: interpretation
deactivate P1
P2 -> PQ: interpretation
deactivate P2
PQ -> DC: all\ninterpretations
deactivate PQ
DC -> DC: filter out\nbest intent
DC -> GW: response\nwith intent
deactivate DC

@enduml
</uml>


== deconcentrator.js ==
  
Uses an Express application to listen for incoming messages. For each incoming message it creates a processor queue. The processors to use can be specified with the `processors` property of a message. See the section //Request Schema - `Message`// below for more information. After all processors are done with their interpretation, results with a confidence score that is too low are filtered out.
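
A simplified sketch of such an endpoint (illustrative only; the `ProcessorQueue` constructor and the `add()` call are assumptions, only `.interpretate(message)` is described in this document):

<code javascript>
// Sketch of the message endpoint – not the actual deconcentrator.js.
const express = require('express')
const ProcessorQueue = require('./model/processor-queue')
const RasaProcessor = require('./model/rasa-processor')

const app = express()
app.use(express.json())

app.post('/messages', async (req, res) => {
  const message = req.body // { text, min_confidence_score, processors }

  const queue = new ProcessorQueue()   // assumed constructor
  queue.add(new RasaProcessor())       // assumed method name
  const interpretations = await queue.interpretate(message)

  // drop results whose confidence score is too low, keep the best one
  const minScore = message.min_confidence_score || 0.8
  const best = interpretations
    .filter(i => i.intent && i.intent.confidence >= minScore)
    .sort((a, b) => b.intent.confidence - a.intent.confidence)[0]

  if (!best) {
    return res.json({ error: "The given message can't be interpretated.", text: message.text })
  }
  res.json(best)
})

app.listen(8338)
</code>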
  
== processor-queue.js ==
  
For every incoming message the deconcentrator creates a new `ProcessorQueue` (defined in `processor-queue.js`) and adds all available processors to it. Calling the queue's `.interpretate(message)` function starts requesting interpretations from the processors. The number of simultaneous asynchronous requests can be set with the `numOfSynchronProcessors` property of the queue.
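
A hypothetical usage sketch (the constructor and `add()` are assumptions; only `.interpretate(message)` and `numOfSynchronProcessors` are documented here):

<code javascript>
// Hypothetical usage sketch – constructor and add() are assumptions.
const ProcessorQueue = require('./model/processor-queue')
const RasaProcessor = require('./model/rasa-processor')

const message = { text: 'Wie wird das Wetter morgen?' }

const queue = new ProcessorQueue()
queue.numOfSynchronProcessors = 2   // limit how many processors are queried at once
queue.add(new RasaProcessor())

// assuming .interpretate() returns a Promise with all interpretations
queue.interpretate(message).then(interpretations => {
  // each interpretation carries an intent with a confidence score
})
</code>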
  
== processor.js ==
  
Defines the interface of an NLU processor.
  
== Implemented Processors ==
  * [[https://raw.githubusercontent.com/beuthbot/deconcentrator-js/master/model/rasa-processor.js|rasa-processor.js]]
  * ...
  
=== Add new NLU processor ===

== Step 1: ==
Create a new NLU processor service.
  
== Step 2: ==
Create a new `PROCESSOR_NAME-processor.js` file in the `model` directory of the project.
  
== Step 3: ==
  
Implement the `name` property and the `interpretate` function. Make sure the response looks like the one from rasa or the demo processor.
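
A sketch of what such a file could look like (a hypothetical `demo-processor.js`; the exact interface defined in `processor.js` may differ):

<code javascript>
// model/demo-processor.js – hypothetical example, not part of the repository.
class DemoProcessor {
  constructor() {
    this.name = 'demo'   // the name used in the `processors` property of a message
  }

  // Must resolve with the same shape as the rasa processor's result:
  // an intent with a confidence score plus the recognized entities.
  async interpretate(message) {
    return {
      intent: { name: 'greeting', confidence: 0.42 },
      entities: [],
      text: message.text
    }
  }
}

module.exports = DemoProcessor
</code>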
  
== Step 4: ==
  
Add the name of the processor to the `default_processors` in the `deconcentrator.js` file.
  
==== API ====
  
The following lists the resources that can be requested with the deconcentrator API.
  
<code>
GET   http://localhost:8338
</code>
  
Returns a sign of life from the deconcentrator.
  
<code>
POST  http://localhost:8338/messages
</code>
  
=== Request Schema - `Message` ===
  
<code>
{
  "text": "Wie wird das Wetter morgen?",
  "min_confidence_score": 0.8,
   "processors": ["rasa"]   "processors": ["rasa"]
 } }
-```+</code>
  
The specification of the `min_confidence_score` and the `processors` is optional. If no minimum confidence score is given, a default one is used (currently `0.8`). For now only the usage of RASA is implemented, so specifying the `processors` property has no effect.
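
For illustration, a request could be sent like this (assuming Node.js 18+ with the built-in `fetch`; run inside an async function or an ES module):

<code javascript>
// Hypothetical client call – assumes the deconcentrator runs on localhost:8338.
const response = await fetch('http://localhost:8338/messages', {
  method: 'POST',
  headers: { 'Content-Type': 'application/json' },
  body: JSON.stringify({
    text: 'Wie wird das Wetter morgen?',
    min_confidence_score: 0.8,   // optional
    processors: ['rasa']         // optional, currently without effect
  })
})
const answer = await response.json()
console.log(answer.intent)
</code>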
  
== Class Diagram ==
  
<uml>
@startuml

class Message {
  text: String
  min_confidence_score: Float
  processors: Array<String>
}

@enduml
</uml>

=== Response Schema - `Answer` ===
The response for a successfully processed request to the deconcentrator contains the following information.
<code>
{
  "intent": {
    ...
  },
  ...
   "text": "Wie wird das Wetter morgen?"   "text": "Wie wird das Wetter morgen?"
 } }
-```+</code>
  
The response for an unsuccessfully processed request to the deconcentrator, or when an error occurs, contains the following information.
<code>
{
  "error": "The given message can't be interpretated.",
  "text": "Wie wird das Wetter morgen?"
}
</code>
  
== Class Diagram ==
  
<uml>
@startuml

class Answer {
  text: String
  intent: Intent
  entities: Array<Entity>
  error: String
}

class Intent {
  name: String
  confidence: Float
}

class Entity {
  start: Int
  end: Int
  text: String
  value: String
  confidence: Float
  additional_info: AdditionalInfo
  entity: String
}

class AdditionalInfo {
  value: String
  grain: String
  type: String
  values: Dictionary<String, Any>
}

Answer *--- Intent
Answer *--- Entity
Entity *--- AdditionalInfo

@enduml
</uml>

==== Implemented and connected NLU processors ====

^ Provider ^ BeuthBot Project ^ Processor File ^
| [[https://rasa.com/docs/rasa|RASA]] | [[https://github.com/beuthbot/rasa|rasa]] | [[https://github.com/beuthbot/deconcentrator-js/blob/master/model/rasa-processor.js|rasa-processor.js]] |

=== More NLU processor candidates ===

  * [[https://azure.microsoft.com/de-de/services/cognitive-services/language-understanding-intelligent-service/|Microsoft LUIS]]
  * [[https://cloud.google.com/natural-language/|Google Cloud NLU]]
  * [[https://www.ibm.com/watson/services/natural-language-understanding/|IBM Watson NLU]]

==== .env ====
  
With the `.env` file the deconcentrator can be configured. The following demonstrates a sample file. The same content can be found in the `.env.sample` file of the project.
  
<code>
RASA_ENDPOINT=http://0.0.0.0:5005/model/parse

# Optional
MIN_CONFIDENCE_SCORE=0.85
</code>
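
For illustration, this is how such variables are typically read in a Node.js service (a sketch assuming the `dotenv` package; the actual deconcentrator code may handle this differently):

<code javascript>
// Sketch: load the .env file and fall back to the documented defaults.
require('dotenv').config()

const rasaEndpoint = process.env.RASA_ENDPOINT || 'http://localhost:5005/model/parse'
const minConfidenceScore = parseFloat(process.env.MIN_CONFIDENCE_SCORE || '0.8')
</code>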
  
==== Requirements Analysis ====
  
  * [x] ''/DCF100/'' The deconcentrator responds to incoming POST requests by delegating the message to a collection of NLU processors which try to interpret the given message
  * [x] ''/DCF101/'' The deconcentrator accepts incoming messages as defined via the Request Schema
  * [x] ''/DCF102/'' The deconcentrator sends answers as defined via the Response Schema
  * [x] ''/DCF103/'' The deconcentrator answers with proper messages for occurring errors
  * [x] ''/DCF104/'' New NLU processors must be easy to integrate
  * [x] ''/DCF105/'' The deconcentrator has a default value for the minimum confidence score
  * [x] ''/DCF106/'' The deconcentrator has a default value for the list of processors
  * [x] ''/DCF107/'' The minimum confidence score can be set globally within the Dockerfile
  * [ ] ''/DCF108/'' The list of processors to be used can be set globally within the Dockerfile
  