
Commit fab931e by Emil Gelfort
Merge pull request #1 from SquareFactory/AR-98_better_docs
2 parents 23ced88 + 76e9690

8 files changed: 152 additions & 17 deletions

CHANGELOG.md

Lines changed: 6 additions & 0 deletions
@@ -5,6 +5,12 @@ All notable changes to this project will be documented in this file.
 The format is based on [Keep a Changelog](http://keepachangelog.com/en/1.0.0/).
 
+## [0.4.2] - 2022.07.06
+
+### Improvements
+- doc improvements for release
+- name change to avoid confusion
+
 ## [0.4.1] - 2022.02.17

README.md

Lines changed: 32 additions & 8 deletions
@@ -1,12 +1,15 @@
+![Isquare deploy logo](docs/imgs/deploy_logo.png)
 # Isquare client for Python
 
-This repository contains the client for [isquare](isquare.ai). It is available under the form of python classes, as well as a command-line-interface.
+This repository contains the official python client for [ISquare](isquare.ai) deploy. It is available as python classes which are ready to use in your code, as well as a command-line interface. We currently support inference with image, text & json files, as well as any numpy array, python dictionary or string, both for input and output.
+
+The complete documentation for ISquare can be found [here](docs.isquare.ai).
 
 ## Installation
 
 ### From pip
 
-TODO when public
+TODO when public.
 
 ### From source
 

@@ -16,34 +19,55 @@ pip install --editable .
 ### Additional requirements
 
-To be able to test your models, you need the following packages:
+To be able to test your model builds, you need the following packages:
 Docker >= 19.03.13
 
 _Note_: If you only need the client for inference, this is not required.
 
 ## Usage
+The client can be used to verify your model build (e.g. checking that it will run properly on [ISquare](isquare.ai)) and to perform inference calls to your deployed models. To use this client for inference, you need to have a model up and running on [ISquare](isquare.ai).
 
 Commands and their usage are described [here](docs/commands.md).
 
-Guidelines on the code adaptation required to deploy a model on isquare.ai can be found [here](docs/isquare_tutorial.md)
+End-to-end guidelines on the code adaptation required to deploy a model on isquare.ai can be found [here](docs/isquare_tutorial.md).
 
 ## Examples
 
-- Build your i2 compatible docker image:
+### Command line interface
 
+#### Test if your model repository is Isquare-compatible
+To verify that your code will run smoothly on [ISquare](isquare.ai), you can perform a local build & unit test. This builds a container image with all your specific dependencies and runs an inference test. We've included an example of a simple computer vision model which returns the mirrored version of the image it is given; it can be tested by running:
 
 ```bash
-i2 build examples/tasks/mirror.py
+i2py build examples/tasks/mirror.py
 ```
+When you deploy a model with [ISquare](isquare.ai), you will be provided a url for the model and asked to create access keys. Using a valid url & access key (the ones displayed here are examples), you can perform an inference with an image model (e.g. the Mirror) and a `.png` image by running:
 
-Simple inference:
 
 ```bash
-i2 infer \
+i2py infer \
 --url wss://archipel-beta1.isquare.ai/43465956-8d6f-492f-ad45-91da69da44d0 \
 --access_uuid 48c1d60a-60fd-4643-90e4-cd0187b4fd9d \
 examples/test.png
 ```
 Other examples can be found [here](docs/getting_started.md).
 
+### Using a model inside your python code
+As you probably want to automate your model calls by integrating them directly into your code, we provide several python classes for exactly that. The main one is the `I2Client` class. A simple inference can be performed as follows:
+
+```python
+from i2_client import I2Client
+import cv2
+
+# You need your url, access key and an image
+url = "wss://archipel-beta1.isquare.ai/43465956-8d6f-492f-ad45-91da69da44d0"
+access_key = "472f9457-072c-4a1a-800b-75ecdd6041e1"
+img = cv2.imread("test.jpg")
+
+# Initialize the client & perform inference
+inference_client = I2Client(url, access_key)
+success, output = inference_client.inference(img)[0]
+```
+
+A more complex example, showing how to stream a camera to your model, can be found [here](examples/webcam_stream.py).
 

docs/commands.md

Lines changed: 6 additions & 6 deletions
@@ -1,9 +1,9 @@
 # i2
 
-`i2`, for isquare, is the name of the general command used for the client:
+`i2py`, short for isquare python client, is the name of the general command used for the client:
 
 ```bash
-Usage: i2 [OPTIONS] COMMAND [ARGS]...
+Usage: i2py [OPTIONS] COMMAND [ARGS]...
 
 Command line interface for isquare.
 
@@ -19,7 +19,7 @@ Commands:
 ## build
 
 ```bash
-Usage: i2 build [OPTIONS] SCRIPT
+Usage: i2py build [OPTIONS] SCRIPT
 
 Build a docker image ready for isquare.
 
@@ -46,7 +46,7 @@ If you just want to test an image without rebuilding it completely you can just
 following command:
 
 ```bash
-Usage: i2 test [OPTIONS] TAG
+Usage: i2py test [OPTIONS] TAG
 
 Verify that a docker image matches the isquare standard.
 
@@ -57,10 +57,10 @@ Options:
 
 ## infer
 
-The `i2 infer` command is used to send the data to your models running on isquare:
+The `i2py infer` command is used to send data to your models running on isquare:
 
 ```bash
-Usage: i2 infer [OPTIONS] DATA
+Usage: i2py infer [OPTIONS] DATA
 
 Send data for inference.
 

docs/getting_started.md

Lines changed: 1 addition & 1 deletion
@@ -22,7 +22,7 @@ The client allows you to build and test your model before uploading it to isquar
 you to test this feature, which we are sure will save you a lot of time. For instance, try running:
 
 ```bash
-i2 build examples/tasks/mirror.py --cpu
+i2py build examples/tasks/mirror.py --cpu
 ```
 You should see the following output:
 

docs/imgs/deploy_logo.png

10.4 KB

docs/isquare_tutorial.md

Lines changed: 21 additions & 1 deletion
@@ -65,8 +65,13 @@ class ArchipelFacePixelizer(ImagesToImagesWorker):
 The imports specify, along with any dependencies or additional functions or classes, which worker type we will be using. In our case, the model takes images as input and also returns images (the same as before, but with the faces blurred), so we choose the `ImagesToImagesWorker`. The name of the worker class always reflects its inputs and outputs. For the moment, the following workers are available:
 - `ImagesToImagesWorker` (e.g. face blurring)
 - `ImagesToDictsWorker` (e.g. classification or detection)
+- `ImagesToStringsWorker` (e.g. image annotation)
 - `StringsToDictsWorker` (e.g. NLP model)
-More types will be added on the way. If the input outputs types you are looking for are not available, please let us know. In the meantime know thaht most data formats can be converted to strings!
+- `DictsToDictsWorker` (e.g. NLP model)
+- `StringsToImagesWorker` (e.g. image generation from captions)
+- `DictsToImagesWorker` (e.g. image generation from captions)
+- `DictsToStringsWorker` (e.g. NLP model)
+More types will be added along the way. If the input/output types you are looking for are not available, please let us know by opening an issue. In the meantime, know that most data formats can be converted to strings and dicts!
 
 You can specify multiple classes and functions inside your worker script (you can even write your whole code inside it, although we do not recommend it). The `__task_class_name__` specifies which class is your worker class, in our case `ArchipelFacePixelizer`.

@@ -205,6 +210,21 @@ self.log(threshold_value)
 This value is now logged, and you can retrieve it on your dashboard on isquare.ai. In this way, the parameters of the model can be adapted to fit the real-life data, and the real performance of the model assessed.
 We highly encourage monitoring appropriate metrics for your model; this way you'll always know how well your model is really performing.
 
+## Advanced usage
+
+### Defining an example input
+Isquare will automatically test your model by generating an input corresponding to your worker class, with one exception: all `DictsToXWorker` workers. It could also be that you want to test your worker with a very specific input. In either of those cases, you can override the `get_dump_input` method of your worker, for example as follows:
+```python
+class MusicalTastePredictor(DictsToDictsWorker):
+    """Predicts musical taste from completely arbitrary information."""
+    ...
+
+    def get_dump_input(self):
+        return {"Name": "John", "Origin": "Switzerland", "Age": 18}
+```
 

examples/README.md

Lines changed: 84 additions & 0 deletions
@@ -0,0 +1,84 @@
+# Examples
+This directory shows 3 sample integrations of the [ISquare](isquare.ai) client for image inference, with 3 levels of complexity:
+- How to perform inference with an image
+- How to perform inference with a video
+- How to stream a camera to your model
+
+## Simple inference
+First, we'll look at how to perform a simple inference with an image file. To start, we need to import our libraries and initialize the client:
+```python
+from i2_client import I2Client
+import cv2
+import numpy as np
+
+# You need your url, access key and an image
+url = "wss://archipel-beta1.isquare.ai/43465956-8d6f-492f-ad45-91da69da44d0"
+access_key = "472f9457-072c-4a1a-800b-75ecdd6041e1"
+
+inference_client = I2Client(url, access_key)
+```
+Then, we load the image using OpenCV and verify that it loaded correctly:
+```python
+img = cv2.imread("test.jpg")
+if img is None:
+    raise FileNotFoundError("invalid image")
+```
+Finally, we just have to call our model using the client. If using an image-to-image model, we can show the original and the returned image next to each other:
+```python
+success, output = inference_client.inference(img)[0]
+concatenated_imgs = np.concatenate((img, output), axis=1)
+cv2.imshow("original / inference", concatenated_imgs)
+```
+And that's it for the simple usage of the client. Our client currently supports strings, numpy arrays, and any python dictionary objects, as long as they are numpy serialisable. If you have a sentiment analysis model for text, your inference could look like the following:
+```python
+success, output = inference_client.inference("It's a rainy summer day")[0]
+```
+or, for a dictionary:
+```python
+success, output = inference_client.inference({"key": "value"})[0]
+```
+
+## Async example
+As inference might take a couple of seconds (mostly depending on your model), you might want to call your model in an async way. To show how to do that, we will write a client which streams your primary webcam to your model.
+
+We first capture the camera output using OpenCV, and then send the data to the model at a certain frame rate:
+```python
+import asyncio
+import time
+
+import cv2
+import numpy as np
+
+from i2_client import I2Client
+
+url = "wss://archipel-beta1.isquare.ai/43465956-8d6f-492f-ad45-91da69da44d0"
+access_key = "472f9457-072c-4a1a-800b-75ecdd6041e1"
+frame_rate = 15
+
+
+async def main():
+    """Stream a webcam to the model."""
+    cam = cv2.VideoCapture(0)
+    prev = 0
+
+    async with I2Client(url, access_key) as client:
+        while True:
+            time_elapsed = time.time() - prev
+            check, frame = cam.read()  # read the cam
+            if time_elapsed < 1.0 / frame_rate:
+                # force the webcam frame rate so the bottleneck is the
+                # inference, not the camera performance.
+                continue
+            prev = time.time()
+            outputs = await client.async_inference(frame)
+            success, output = outputs[0]
+
+            if not success:
+                raise RuntimeError(output)
+
+            # showing original and inference for an image-to-image model
+            concatenated_imgs = np.concatenate((frame, output), axis=1)
+            cv2.imshow("original / inference", concatenated_imgs)
+
+    cam.release()
+    cv2.destroyAllWindows()
+
+
+asyncio.run(main())
+```
+You can easily stream any source to your model using this type of integration, and seamlessly integrate your models in an async way, so that your code is completely independent of your model inference time.
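The frame-rate cap used in the webcam loop above can be factored into a tiny, dependency-free helper. This is only a sketch of the same throttling idea; the `Throttle` name is ours and not part of the i2_client API:

```python
import time


class Throttle:
    """Accept at most `max_fps` ticks per second; reject the rest."""

    def __init__(self, max_fps):
        self.min_interval = 1.0 / max_fps
        self.prev = float("-inf")  # so the first tick is always accepted

    def ready(self, now=None):
        """Return True (and record the time) if enough time has passed."""
        if now is None:
            now = time.time()
        if now - self.prev < self.min_interval:
            return False
        self.prev = now
        return True
```

In the streaming loop, `if not throttle.ready(): continue` would replace the manual `time_elapsed` / `prev` bookkeeping, keeping the inference call as the only bottleneck.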

setup.py

Lines changed: 2 additions & 1 deletion
@@ -27,11 +27,12 @@
 "numpy>=1.19",
 "rich>=10.13",
 "websockets>=8.1",
+"opencv-python==4.6.0.66",
 ],
 packages=find_packages(),
 entry_points="""
 [console_scripts]
-i2=i2_client:i2_cli
+i2py=i2_client:i2_cli
 """,
 python_requires=">=3.8",
 )
