
Commit 87d79ef

Added doc for examples

1 parent 529da95 commit 87d79ef

2 files changed: 85 additions & 1 deletion

README.md (1 addition & 1 deletion)
````diff
@@ -65,7 +65,7 @@ img = cv2.imread("test.jpg")
 
 # Initialize the client & perform inference
 inference_client = I2Client(url,access_key)
-success, output = i2_client.inference(img)[0]
+success, output = inference_client.inference(img)[0]
 ```
 
 A more complex example, showing how to stream a camera to your model, can be found [here](examples/webcam_stream.py)
````

examples/README.md (new file, 84 additions & 0 deletions)
# Examples

This directory shows three sample integrations of the [ISquare](isquare.ai) client for image inference, at three levels of complexity:

- How to perform inference with an image
- How to perform inference with a video
- How to stream a camera to your model
## Simple inference

First, we'll look at how to perform a simple inference with an image file. To start, we import our libraries and initialize the client:

```python
from i2_client import I2Client
import cv2
import numpy as np

# You need your url, access key and an image
url = "wss://archipel-beta1.isquare.ai/43465956-8d6f-492f-ad45-91da69da44d0"
access_key = "472f9457-072c-4a1a-800b-75ecdd6041e1"

inference_client = I2Client(url, access_key)
```
Then, we load the image using OpenCV and verify that it was loaded correctly:

```python
img = cv2.imread("test.jpg")
if img is None:
    raise FileNotFoundError("invalid image")
```
Finally, we just have to call our model through the client. If using an image-to-image model, we can show the original and the model output next to each other:

```python
success, output = inference_client.inference(img)[0]
concatenate_imgs = np.concatenate((img, output), axis=1)
cv2.imshow("original / inference", concatenate_imgs)
cv2.waitKey(0)  # keep the window open until a key is pressed
```
And that's it for the simple usage of the client. Our client currently supports strings, numpy arrays, and any Python dictionary, as long as the values are numpy-serializable. If you have a sentiment-analysis model for text, your inference could look like the following:

```python
success, output = inference_client.inference("It's a rainy summer day")[0]
```

or, for a dictionary:

```python
success, output = inference_client.inference({"key": value})[0]
```
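The second bullet above, video inference, follows the same pattern as the image case: read frames one by one (e.g. with `cv2.VideoCapture("test.mp4")`) and pass each to `inference_client.inference`. One detail worth isolating is the frame-sampling arithmetic, since sending every frame of a long clip would flood the endpoint. The helper below is a hypothetical sketch (the name and the every-n-th policy are our assumptions, not part of the i2_client API):

```python
def frames_to_send(total_frames: int, every_nth: int) -> list:
    """Indices of the frames worth sending to the model.

    Keeps frame 0 and then every `every_nth` frame after it, so a long
    video produces a bounded number of inference calls.
    """
    if every_nth < 1:
        raise ValueError("every_nth must be >= 1")
    return list(range(0, total_frames, every_nth))


# A 100-frame clip sampled every 25 frames yields 4 inference calls:
print(frames_to_send(100, 25))  # → [0, 25, 50, 75]
```

In the video loop you would then call `inference_client.inference(frame)` only when the current frame index is in this list.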
## Async example

Since inference might take a couple of seconds (depending mostly on your model), you might want to call your model asynchronously. To show how, we will write a client that streams your primary webcam to your model.

We first capture the camera output using OpenCV, then send the frames to the model at a fixed frame rate:
```python
import asyncio
import time

import cv2
import numpy as np

from i2_client import I2Client

url = "wss://archipel-beta1.isquare.ai/43465956-8d6f-492f-ad45-91da69da44d0"
access_key = "472f9457-072c-4a1a-800b-75ecdd6041e1"
frame_rate = 15


async def main():
    """Stream a webcam to the model."""
    cam = cv2.VideoCapture(0)
    prev = 0

    async with I2Client(url, access_key) as client:
        while True:
            time_elapsed = time.time() - prev
            check, frame = cam.read()  # read the cam
            if not check:
                break  # no frame available, stop streaming
            if time_elapsed < 1.0 / frame_rate:
                # force the webcam frame rate so the bottleneck is the
                # inference, not the camera performance.
                continue
            prev = time.time()
            outputs = await client.async_inference(frame)
            success, output = outputs[0]

            if not success:
                raise RuntimeError(output)

            # showing original and inference for an image-to-image model
            concatenate_imgs = np.concatenate((frame, output), axis=1)
            cv2.imshow("original / inference", concatenate_imgs)
            if cv2.waitKey(1) & 0xFF == ord("q"):
                break  # press "q" to stop streaming

    cam.release()
    cv2.destroyAllWindows()


asyncio.run(main())
```
You can easily stream any source to your model using this type of integration, as well as seamlessly integrate your models in an async way, so that your code is completely independent of your model's inference time.
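That independence comes from asyncio's cooperative scheduling: while one inference call awaits the network, other coroutines keep running. A toy illustration of the effect, where `fake_inference` is a stand-in we made up for `client.async_inference` (it sleeps instead of calling a real model):

```python
import asyncio
import time


async def fake_inference(frame, delay=0.1):
    # Stand-in for client.async_inference: awaiting the sleep yields
    # control, just like awaiting a real network round-trip would.
    await asyncio.sleep(delay)
    return True, f"result-{frame}"


async def main():
    # Five "frames" processed concurrently finish in roughly one delay
    # (~0.1 s total), not five delays (~0.5 s) as a sequential loop would.
    start = time.perf_counter()
    results = await asyncio.gather(*(fake_inference(i) for i in range(5)))
    elapsed = time.perf_counter() - start
    print([out for ok, out in results], f"{elapsed:.2f}s")


asyncio.run(main())
```

The same `asyncio.gather` pattern lets you batch several pending inference calls while the rest of your application keeps running.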
