# Emotiv Cortex API Python Examples

This repository provides a set of Python examples to help you get started with the [Emotiv Cortex API](https://emotiv.gitbook.io/cortex-api). Each script demonstrates a specific workflow, making it easier to understand and integrate Cortex API features into your own projects.

## Requirements

- Python 3.7 or later
- Install dependencies:
  - `pip install websocket-client`
  - `pip install python-dispatch`

## Getting Started

Before running the examples, make sure you have completed the following steps:

1. **Download and install the EMOTIV Launcher**: Download it from [emotiv.com](https://www.emotiv.com/products/emotiv-launcher), log in with your EmotivID, and accept the latest Terms of Use, Privacy Policy, and EULA.
2. **Obtain an EMOTIV headset or create a virtual device**:
   - Purchase a headset from the [EMOTIV online store](https://www.emotiv.com/), **or**
   - Create a virtual headset in the EMOTIV Launcher by following [these instructions](https://emotiv.gitbook.io/emotiv-launcher/devices-setting-up-virtual-brainwear-r/creating-a-virtual-brainwear-device).
3. **Get a client ID and secret**: Log in to your Emotiv account at [emotiv.com](https://www.emotiv.com/my-account/cortex-apps/) and create a Cortex app. If you don't have an EmotivID, [register here](https://id.emotivcloud.com/eoidc/account/registration/).
4. **Authorize the examples**: The first time you run these examples, you may need to grant them permission to work with Emotiv Cortex in the EMOTIV Launcher.
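Once you have a client ID and secret, the first two Cortex API calls a client typically makes are `requestAccess` and `authorize`. The sketch below builds those JSON-RPC payloads; the method and parameter names follow the Cortex API documentation, while the credentials and request ids are placeholders.

```python
import json

def build_request(request_id, method, params):
    """Wrap a Cortex API call in a JSON-RPC 2.0 envelope."""
    return {"jsonrpc": "2.0", "id": request_id, "method": method, "params": params}

client_id = "YOUR_CLIENT_ID"          # from emotiv.com -> Cortex apps
client_secret = "YOUR_CLIENT_SECRET"

# Ask the user (via the Launcher) to approve this application.
request_access = build_request(1, "requestAccess",
                               {"clientId": client_id,
                                "clientSecret": client_secret})

# Once approved, authorize returns a cortexToken used by all later calls.
authorize = build_request(2, "authorize",
                          {"clientId": client_id,
                           "clientSecret": client_secret})

payload = json.dumps(authorize)        # string to send over the websocket
```

In a real client, each payload is sent over the secure websocket opened by `websocket-client`, and the token in the `authorize` response is kept for subsequent requests.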

---

## Example Scripts Overview

### 1. `cortex.py` — Cortex API Wrapper
The central wrapper class for the Cortex API. It handles:
- Opening and managing the websocket connection
- Building JSON-RPC requests
- Handling responses and errors, and emitting events to the corresponding classes
- Parsing incoming data and dispatching it to the workflow scripts
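As an illustration of the bookkeeping such a wrapper performs, the sketch below pairs each JSON-RPC response with its originating request by `id`. The class and method names here are hypothetical, not the actual API of `cortex.py`.

```python
import itertools
import json

class RpcBook:
    """Track outgoing JSON-RPC calls so replies can be matched by id."""

    def __init__(self):
        self._ids = itertools.count(1)
        self._pending = {}            # id -> method name awaiting a reply

    def request(self, method, params):
        rid = next(self._ids)
        self._pending[rid] = method
        return json.dumps({"jsonrpc": "2.0", "id": rid,
                           "method": method, "params": params})

    def handle(self, raw):
        msg = json.loads(raw)
        if "id" in msg:               # a reply to one of our calls
            method = self._pending.pop(msg["id"], None)
            return method, msg.get("result"), msg.get("error")
        return None, msg, None        # stream data or warning: no id field

book = RpcBook()
out = book.request("queryHeadsets", {})
reply = '{"jsonrpc":"2.0","id":1,"result":[]}'
method, result, error = book.handle(reply)
```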
### 2. `sub_data.py` — Subscribe to Data Streams
Demonstrates how to:
- Subscribe to data streams (EEG, motion, band power, performance metrics, etc.)
- Print or process incoming data
See: [Data Subscription](https://emotiv.gitbook.io/cortex-api/data-subscription)
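A `subscribe` request of the kind this script sends might look like the sketch below. The stream names follow the data subscription docs, while the token and session id are placeholders obtained from earlier `authorize`/`createSession` calls.

```python
import json

subscribe = {
    "jsonrpc": "2.0",
    "id": 6,
    "method": "subscribe",
    "params": {
        "cortexToken": "YOUR_CORTEX_TOKEN",   # from authorize
        "session": "YOUR_SESSION_ID",         # from createSession
        # eeg = raw EEG, mot = motion, pow = band power, met = metrics
        "streams": ["eeg", "mot", "pow", "met"],
    },
}
payload = json.dumps(subscribe)
```

After a successful subscription, Cortex pushes one JSON message per sample on each stream, which the script then prints or processes.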
### 3. `record.py` — Record and Export Data
Demonstrates how to:
- Create a new record
- Stop a record
- Export recorded data to CSV or EDF format
See: [Records](https://emotiv.gitbook.io/cortex-api/records)
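The record workflow can be sketched as three payloads, assuming the `createRecord`/`stopRecord`/`exportRecord` method names from the records documentation; the token, session, record id, and export folder below are all placeholders.

```python
import json

token, session = "YOUR_CORTEX_TOKEN", "YOUR_SESSION_ID"

create_record = {"jsonrpc": "2.0", "id": 10, "method": "createRecord",
                 "params": {"cortexToken": token, "session": session,
                            "title": "my-first-record"}}

stop_record = {"jsonrpc": "2.0", "id": 11, "method": "stopRecord",
               "params": {"cortexToken": token, "session": session}}

export_record = {"jsonrpc": "2.0", "id": 12, "method": "exportRecord",
                 "params": {"cortexToken": token,
                            "recordIds": ["RECORD_ID_FROM_createRecord"],
                            "folder": "/tmp/cortex-exports",
                            "format": "CSV",                # or "EDF"
                            "streamTypes": ["EEG", "MOTION"]}}

payload = json.dumps(create_record)   # each dict is serialized before sending
```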
### 4. `marker.py` — Inject Markers
Demonstrates how to:
- Inject markers into a record during data collection
- Export records with marker information
See: [Markers](https://emotiv.gitbook.io/cortex-api/markers)
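An `injectMarker` payload might look like the sketch below; per the markers docs the marker time is expressed in milliseconds since the epoch, while the token, session, value, and label here are placeholders.

```python
import json
import time

inject_marker = {
    "jsonrpc": "2.0",
    "id": 13,
    "method": "injectMarker",
    "params": {
        "cortexToken": "YOUR_CORTEX_TOKEN",
        "session": "YOUR_SESSION_ID",
        "time": int(time.time() * 1000),   # milliseconds since the epoch
        "value": "stimulus-1",             # your own marker value
        "label": "visual",                 # your own marker label
    },
}
payload = json.dumps(inject_marker)
```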
### 5. `mental_command_train.py` — Mental Command Training
Demonstrates how to:
- Load or create a training profile
- Train mental command actions (e.g., neutral, push, pull)
See: [BCI](https://emotiv.gitbook.io/cortex-api/bci)
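Both training scripts drive the same `training` request described in the BCI docs. The sketch below builds it for the mental command detection; swapping `detection` to `"facialExpression"` (with an action such as `"smile"`) covers facial expression training as well. The token and session are placeholders.

```python
def build_training(request_id, detection, action, status):
    """Build a 'training' request; status cycles through e.g.
    'start' then 'accept' or 'reject', per the BCI docs."""
    return {"jsonrpc": "2.0", "id": request_id, "method": "training",
            "params": {"cortexToken": "YOUR_CORTEX_TOKEN",
                       "session": "YOUR_SESSION_ID",
                       "detection": detection,
                       "action": action,
                       "status": status}}

# A single training round for the "push" mental command:
start_push = build_training(20, "mentalCommand", "push", "start")
accept_push = build_training(21, "mentalCommand", "push", "accept")
```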
### 6. `facial_expression_train.py` — Facial Expression Training
Demonstrates how to:
- Load or create a training profile
- Train facial expression actions (e.g., neutral, surprise, smile)
See: [BCI](https://emotiv.gitbook.io/cortex-api/bci)
### 7. `live_advance.py` — Advanced Live Data & Sensitivity
Demonstrates how to:
- Load a trained profile
- Subscribe to the 'com' stream for live mental command data
- Optionally subscribe to the 'fac' stream for live facial expression data
- Get and set the sensitivity of mental command actions in live mode
See: [Advanced BCI](https://emotiv.gitbook.io/cortex-api/advanced-bci)
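The sensitivity get/set calls can be sketched as two `mentalCommandActionSensitivity` payloads, following the Advanced BCI docs; the profile name, token, and the per-action values below are placeholders (consult the docs for the valid value range).

```python
get_sensitivity = {"jsonrpc": "2.0", "id": 30,
                   "method": "mentalCommandActionSensitivity",
                   "params": {"cortexToken": "YOUR_CORTEX_TOKEN",
                              "profile": "my-profile",
                              "status": "get"}}

set_sensitivity = {"jsonrpc": "2.0", "id": 31,
                   "method": "mentalCommandActionSensitivity",
                   "params": {"cortexToken": "YOUR_CORTEX_TOKEN",
                              "profile": "my-profile",
                              "status": "set",
                              # one value per trained (non-neutral) action
                              "values": [7, 7, 5, 5]}}
```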
---

## Tips

- Each script is self-contained and demonstrates a single workflow.
- This code is purely an example of how to work with Cortex; we strongly recommend adjusting it to your own purposes.
- For more details, refer to the [official Cortex API documentation](https://emotiv.gitbook.io/cortex-api/).