
Commit b4b2e84

[Feature] Multimodality (Vision and Text Inputs) (#3)
* Update package release action to activate with new tag
* Add pytest to dev requirements for building project
* Update tests to use pytest
* Rename files so that pytest recognizes them
* Add HTTP/2 protocol support
* Configure pytest dev requirements to include parallel testing and async testing
* Add contributing guide
* Update code docs for HTTP/2 protocol
* Clean up code
* Add multimodal functionality
* Integrate multimodality and functional query fns
* Update tests for multimodality
* Update base URL
* [PROJ] Format code with Black
* Update docstrings
* [PROJ] Format code with Black
* Add file uploading capabilities
* Separate query validation from query
* Clean up code and improve abstractions
* Remove bugs
* [PROJ] Format code with Black
* Fix bugs
* [PROJ] Format code with Black
* Add S3 POST uploading
* [PROJ] Format code with Black
* Download public URLs as well
* Update tests
* Handle client closing and server-side errors gracefully
* [PROJ] Format code with Black
* Update cookbook
* Enable backward compatibility for earlier Python versions
* Add TODO for updated README with new examples
* [PROJ] Format code with Black
* Update examples and test client service

---------

Co-authored-by: GitHub Action <action@github.com>
1 parent f3603ed commit b4b2e84

15 files changed

Lines changed: 889 additions & 197 deletions

.github/workflows/release.yml

Lines changed: 2 additions & 2 deletions
````diff
@@ -79,7 +79,7 @@ jobs:
           GITHUB_TOKEN: ${{ github.token }}
         run: >-
           gh release create
-          '{{ github.event.release.tag_name }}'
+          '${{ github.event.release.tag_name }}'
           --repo '${{ github.repository }}'
           --notes ""
       - name: Upload artifact signatures to GitHub Release
@@ -90,5 +90,5 @@ jobs:
         # sigstore-produced signatures and certificates.
         run: >-
           gh release upload
-          '{{ github.event.release.tag_name }}' dist/**
+          '${{ github.event.release.tag_name }}' dist/**
          --repo '${{ github.repository }}'
````

The fix is the missing `$` prefix: GitHub Actions only expands expressions written as `${{ ... }}`, so the bare `{{ ... }}` form was being passed to `gh` as a literal string instead of the release tag.

.gitignore

Lines changed: 4 additions & 1 deletion
````diff
@@ -165,4 +165,7 @@ cython_debug/
 .DS_Store
 
 # Case Studies
-case_studies/
+case_studies/
+
+# Testing files
+test.py
````

README.md

Lines changed: 35 additions & 3 deletions
````diff
@@ -6,6 +6,7 @@
 ![Python](https://img.shields.io/badge/Python-3.10.14-blue)
 [![License](https://img.shields.io/badge/License-MIT-green.svg)](https://opensource.org/licenses/MIT)
 
+<!-- TODO: Update examples -->
 # $f$(👷‍♂️)
 
 <a href="https://colab.research.google.com/github/WecoAI/weco-python/blob/main/examples/cookbook.ipynb" target="_parent"><img src="https://colab.research.google.com/assets/colab-badge.svg" alt="Open In Colab" width=110 height=20/></a>
@@ -29,6 +30,7 @@ pip install weco
 - The **query** function allows you to test and use the newly created function in your own code.
 - We offer asynchronous versions of the above clients.
 - We provide a **batch_query** function that allows users to batch functions for various inputs as well as multiple inputs for the same function in a query. This is helpful for making a large number of queries more efficiently.
+- We also offer multimodal capabilities. You can now query our client with both **language** AND **vision** inputs!
 
 We provide both services in two ways:
 - `weco.WecoAI` client to be used when you want to maintain the same client service across a portion of code. This is better for dense service usage.
@@ -45,18 +47,48 @@ export WECO_API_KEY=<YOUR_WECO_API_KEY>
 ## Example
 
 We create a function on the [web console](https://weco-app.vercel.app/function) for the following task:
-> "I want to evaluate the feasibility of a machine learning task. Give me a json object with three keys - 'feasibility', 'justification', and 'suggestions'."
+> "Analyze a business idea and provide a structured evaluation. Output a JSON with 'viability_score' (0-100), 'strengths' (list), 'weaknesses' (list), and 'next_steps' (list)."
 
 Now, you're ready to query this function anywhere in your code!
 
 ```python
 from weco import query
 response = query(
-    fn_name=fn_name,
-    fn_input="I want to train a model to predict house prices using the Boston Housing dataset hosted on Kaggle.",
+    fn_name="BusinessIdeaAnalyzer-XYZ123",  # Replace with your actual function name
+    text_input="A subscription service for personalized, AI-generated bedtime stories for children."
 )
 ```
 
 For more examples and an advanced user guide, check out our function builder [cookbook](examples/cookbook.ipynb).
 
 ## Happy building $f$(👷‍♂️)!
+
+## Contributing
+
+We value your contributions! If you believe you can help improve our package, enabling people to build AI with AI, please contribute!
+
+Use the following steps as a guideline for making contributions:
+
+1. Download and install the package from source:
+```bash
+git clone https://github.com/WecoAI/weco-python.git
+cd weco-python
+pip install -e ".[dev]"
+```
+
+2. Create a new branch for your feature or bugfix:
+```bash
+git checkout -b feature/your-feature-name
+```
+
+3. Make your changes and run tests to ensure everything is working:
+
+> **Tests can be expensive to run as they make LLM requests with the API key being used, so it is in the developer's best interest to write small and simple tests that add coverage for a large portion of the package.**
+
+```bash
+pytest -n auto tests
+```
+
+4. Commit and push your changes, then open a PR for us to view 😁
+
+Please ensure your code follows our style guidelines (NumPy docstrings) and includes appropriate tests. We appreciate your contributions!
````

examples/cookbook.ipynb

Lines changed: 131 additions & 32 deletions
````diff
@@ -49,7 +49,7 @@
    "cell_type": "markdown",
    "metadata": {},
    "source": [
-    "You will need to export your API key which can be found [here](https://weco-app.vercel.app/account)."
+    "Export your API key which can be found [here](https://weco-app.vercel.app/account)."
    ]
   },
   {
````
````diff
@@ -65,11 +65,11 @@
    "cell_type": "markdown",
    "metadata": {},
    "source": [
-    "You can build functions for complex systems quickly and without friction in just a few lines of code...For example you can create a function for the following task in the [web console](https://weco-app.vercel.app/function): \n",
+    "You can build powerful AI functions for complex tasks quickly and without friction. For example, you can create a function in the [web console](https://weco-app.vercel.app/function) with this description:\n",
    "\n",
-    "> \"I want to evaluate the feasibility of a machine learning task. Give me a json object with three keys - 'feasibility', 'justification', and 'suggestions'.\"\n",
+    "> \"Analyze a business idea and provide a structured evaluation. Output a JSON with 'viability_score' (0-100), 'strengths' (list), 'weaknesses' (list), and 'next_steps' (list).\"\n",
    "\n",
-    "Now, you're ready to query this function anywhere in your code!"
+    "Once created, you can query this function anywhere in your code with just a few lines:"
    ]
   },
   {
````
````diff
@@ -79,10 +79,108 @@
    "outputs": [],
    "source": [
     "from weco import query\n",
+    "\n",
     "response = query(\n",
-    "    fn_name=<YOUR_FUNCTION_NAME>,\n",
-    "    fn_input=\"I want to train a model to predict house prices using the Boston Housing dataset hosted on Kaggle.\",\n",
-    ")"
+    "    fn_name=\"BusinessIdeaAnalyzer-XYZ123\",  # Replace with your actual function name\n",
+    "    text_input=\"A subscription service for personalized, AI-generated bedtime stories for children.\"\n",
+    ")\n",
+    "\n",
+    "print(response)"
+   ]
+  },
+  {
+   "cell_type": "markdown",
+   "metadata": {},
+   "source": [
+    "## Multimodality"
+   ]
+  },
+  {
+   "cell_type": "markdown",
+   "metadata": {},
+   "source": [
+    "Our AI functions can interpret complex visual information, follow instructions in natural language, and provide practical insights. Let's explore how an AI function can look at images of your home and smart meter and offer personalized energy-saving advice.\n",
+    "As shown in the example below, you can provide the image input in several ways:\n",
+    "1. Base64 encoding\n",
+    "2. Public URL\n",
+    "3. Local path"
+   ]
+  },
+  {
+   "cell_type": "code",
+   "execution_count": null,
+   "metadata": {},
+   "outputs": [],
+   "source": [
+    "from weco import build, query\n",
+    "import base64\n",
+    "\n",
+    "task_description = \"\"\"\n",
+    "Create a Smart Home Energy Analyzer that can process images of smart meters, home exteriors, \n",
+    "and indoor spaces to provide energy efficiency insights. The analyzer should:\n",
+    "    1. Interpret smart meter readings\n",
+    "    2. Assess home features relevant to energy consumption\n",
+    "    3. Analyze thermostat settings\n",
+    "    4. Provide energy-saving recommendations\n",
+    "    5. Evaluate renewable energy potential\n",
+    "\n",
+    "The output should include:\n",
+    "    - 'energy_consumption': current usage and comparison to average\n",
+    "    - 'home_analysis': visible energy features and potential issues\n",
+    "    - 'thermostat_settings': current settings and recommendations\n",
+    "    - 'energy_saving_recommendations': actionable suggestions with estimated savings\n",
+    "    - 'renewable_energy_potential': assessment of current and potential renewable energy use\n",
+    "    - 'estimated_carbon_footprint': current footprint and potential reduction\n",
+    "\"\"\"\n",
+    "\n",
+    "fn_name, _ = build(task_description=task_description)\n",
+    "\n",
+    "request = \"\"\"\n",
+    "Analyze these images of my home and smart meter to provide energy efficiency insights \n",
+    "and recommendations for reducing my electricity consumption.\n",
+    "\"\"\"\n",
+    "\n",
+    "# Base64 encoded image\n",
+    "with open(\"/path/to/home_exterior.jpeg\", \"rb\") as img_file:\n",
+    "    my_home_exterior = base64.b64encode(img_file.read()).decode('utf-8')\n",
+    "\n",
+    "response = query(\n",
+    "    fn_name=fn_name,\n",
+    "    text_input=request,\n",
+    "    images_input=[\n",
+    "        \"https://example.com/my_smart_meter_reading.png\",  # Public URL\n",
+    "        f\"data:image/jpeg;base64,{my_home_exterior}\",  # Base64 encoding\n",
+    "        \"/path/to/living_room_thermostat.jpg\"  # Local image path\n",
+    "    ]\n",
+    ")\n",
+    "\n",
+    "print(response)"
+   ]
+  },
+  {
+   "cell_type": "markdown",
+   "metadata": {},
+   "source": [
+    "## Running Example"
+   ]
+  },
+  {
+   "cell_type": "markdown",
+   "metadata": {},
+   "source": [
+    "Consider the previous example of:\n",
+    "> \"I want to evaluate the feasibility of a machine learning task. Help me understand this through - 'feasibility', 'justification', and 'suggestions'.\"\n",
+    "\n",
+    "Here's how you can take advantage of our API to best suit your needs."
+   ]
+  },
+  {
+   "cell_type": "code",
+   "execution_count": null,
+   "metadata": {},
+   "outputs": [],
+   "source": [
+    "task_description = \"I want to evaluate the feasibility of a machine learning task. Give me a json object with three keys - 'feasibility', 'justification', and 'suggestions'.\""
    ]
   },
   {
````
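An aside on the base64 option in the hunk above: what the cookbook ultimately passes to `images_input` is a `data:` URI. A minimal standard-library sketch of building one (the helper name `to_data_uri` is ours for illustration, not part of the `weco` package):

```python
import base64

def to_data_uri(image_bytes: bytes, mime: str = "image/jpeg") -> str:
    """Wrap raw image bytes as a data URI: data:<mime>;base64,<payload>."""
    payload = base64.b64encode(image_bytes).decode("utf-8")
    return f"data:{mime};base64,{payload}"

# Tiny stand-in payload; in practice, pass the bytes of a real image file.
print(to_data_uri(b"abc"))  # data:image/jpeg;base64,YWJj
```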
````diff
@@ -108,15 +206,13 @@
    "from weco import build, query\n",
    "\n",
    "# Describe the task you want the function to perform\n",
-    "fn_name, fn_desc = build(\n",
-    "    task_description=\"I want to evaluate the feasibility of a machine learning task. Give me a json object with three keys - 'feasibility', 'justification', and 'suggestions'.\",\n",
-    ")\n",
-    "print(f\"Model Name: {fn_name}\\nModel Description:\\n{fn_desc}\")\n",
+    "fn_name, fn_desc = build(task_description=task_description)\n",
+    "print(f\"AI Function {fn_name} built. This does the following - \\n{fn_desc}.\")\n",
    "\n",
    "# Query the function with a specific input\n",
    "query_response = query(\n",
    "    fn_name=fn_name,\n",
-    "    fn_input=\"I want to train a model to predict house prices using the Boston Housing dataset hosted on Kaggle.\",\n",
+    "    text_input=\"I want to train a model to predict house prices using the Boston Housing dataset hosted on Kaggle.\"\n",
    ")\n",
    "for key, value in query_response.items(): print(f\"{key}: {value}\")"
   ]
````
````diff
@@ -136,18 +232,13 @@
    "source": [
    "from weco import WecoAI\n",
    "\n",
+    "# Connect to our service, using our client\n",
    "client = WecoAI()\n",
    "\n",
-    "# Describe the task you want the function to perform\n",
-    "fn_name, fn_desc = client.build(\n",
-    "    task_description=\"I want to evaluate the feasibility of a machine learning task. Give me a json object with three keys - 'feasibility', 'justification', and 'suggestions'.\"\n",
-    ")\n",
-    "print(f\"Model Name: {fn_name}\\nModel Description:\\n{fn_desc}\")\n",
-    "\n",
-    "# Query the function with a specific input\n",
+    "# Make the same query as before\n",
    "query_response = client.query(\n",
    "    fn_name=fn_name,\n",
-    "    fn_input=\"I want to train a model to predict house prices using the Boston Housing dataset hosted on Kaggle.\",\n",
+    "    text_input=\"I want to train a model to predict house prices using the Boston Housing dataset hosted on Kaggle.\"\n",
    ")\n",
    "for key, value in query_response.items(): print(f\"{key}: {value}\")"
   ]
````
````diff
@@ -175,20 +266,30 @@
    "from weco import batch_query\n",
    "\n",
    "# Query the same function with multiple inputs by batching them for maximum efficiency\n",
+    "input_1 = {\"text_input\": \"I want to train a model to predict house prices using the Boston Housing dataset hosted on Kaggle.\"}\n",
+    "input_2 = {\n",
+    "    \"text_input\": \"I want to train a model to classify digits using the MNIST dataset hosted on Kaggle using a Google Colab notebook. Attached is an example of what some of the digits would look like.\",\n",
+    "    \"images_input\": [\"https://machinelearningmastery.com/wp-content/uploads/2019/02/Plot-of-a-Subset-of-Images-from-the-MNIST-Dataset-1024x768.png\"]\n",
+    "}\n",
    "query_responses = batch_query(\n",
-    "    fn_names=YOUR_FUNCTION_NAME,  # The name of the function we built in the previous step\n",
-    "    batch_inputs=[\n",
-    "        \"I want to train a model to predict house prices using the Boston Housing dataset hosted on Kaggle.\",\n",
-    "        \"I want to train a model to classify digits using the MNIST dataset hosted on Kaggle using a Google Colab notebook.\",\n",
-    "    ],\n",
+    "    fn_names=fn_name,\n",
+    "    batch_inputs=[input_1, input_2]\n",
    ")"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
-    "You can do the same using the `weco.WecoAI` client. If you wanted to batch different functions, you can pass a list of function names to the `batch_query()` function. Note that the names of functions would need to be ordered the same as the function inputs provided."
+    "You can do the same using the `weco.WecoAI` client. If you want to batch different functions, you can pass a list of function names to `batch_query()`. Note that the function names must be ordered the same as the corresponding function inputs.\n",
+    "\n",
+    "In addition, `weco.batch_query` takes the input batch as an array of individual inputs, each formatted as follows -\n",
+    "```json\n",
+    "{\n",
+    "    \"text_input\": \"Your text input\",\n",
+    "    \"images_input\": [\"image1\", \"image2\"]\n",
+    "}\n",
+    "```"
   ]
  },
  {
````
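Because each batch entry is a plain dictionary in the shape the cookbook documents, it can be sanity-checked locally before any network call. A small sketch under that assumption (`validate_batch_inputs` is a hypothetical helper, not part of the `weco` package):

```python
from typing import Any, Dict, List

def validate_batch_inputs(batch: List[Dict[str, Any]]) -> List[Dict[str, Any]]:
    """Reject entries that do not match the documented batch_query input shape.

    Hypothetical helper: the real package may validate differently.
    """
    for item in batch:
        if "text_input" not in item:
            raise ValueError("each batch entry needs a 'text_input' key")
        if not isinstance(item.get("images_input", []), list):
            raise ValueError("'images_input' must be a list of image references")
    return batch

batch = validate_batch_inputs([
    {"text_input": "Evaluate idea A."},
    {"text_input": "Evaluate idea B.",
     "images_input": ["https://example.com/b.png"]},
])
print(len(batch))  # 2
```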
````diff
@@ -202,7 +303,7 @@
    "cell_type": "markdown",
    "metadata": {},
    "source": [
-    "Until now you've been making synchronous calls to our client by we also support asynchronous programmers. This is actually how we implement batching! You can also make asynchronous calls to our service using our `weco.WecoAI` client or as shown below:"
+    "Until now you've been making synchronous calls to our client, but we also support asynchronous usage. This is actually how we implement batching! You can also make asynchronous calls to our service using our `weco.WecoAI` client or as shown below for the same example as before:"
    ]
   },
   {
````
````diff
@@ -214,15 +315,13 @@
    "from weco import abuild, aquery\n",
    "\n",
    "# Describe the task you want the function to perform\n",
-    "fn_name, fn_desc = await abuild(\n",
-    "    task_description=\"I want to evaluate the feasibility of a machine learning task. Give me a json object with three keys - 'feasibility', 'justification', and 'suggestions'.\",\n",
-    ")\n",
-    "print(f\"Model Name: {fn_name}\\nModel Description:\\n{fn_desc}\")\n",
+    "fn_name, fn_desc = await abuild(task_description=task_description)\n",
+    "print(f\"AI Function {fn_name} built. This does the following - \\n{fn_desc}.\")\n",
    "\n",
    "# Query the function with a specific input\n",
    "query_response = await aquery(\n",
    "    "fn_name=fn_name,\n",
-    "    fn_input=\"I want to train a model to predict house prices using the Boston Housing dataset hosted on Kaggle.\",\n",
+    "    text_input=\"I want to train a model to predict house prices using the Boston Housing dataset hosted on Kaggle.\"\n",
    ")\n",
    "for key, value in query_response.items(): print(f\"{key}: {value}\")"
   ]
````
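The notebook awaits `abuild`/`aquery` directly because Jupyter already runs an event loop; in a plain script you would drive them with `asyncio.run`. A runnable sketch of that pattern using stand-in coroutines (stubs that only mirror the call shape, since the real functions require a `WECO_API_KEY`):

```python
import asyncio

# Stand-in coroutines mirroring the call shape of weco's async API (not the real client).
async def abuild(task_description: str):
    return "FeasibilityChecker-ABC123", "Evaluates the feasibility of an ML task."

async def aquery(fn_name: str, text_input: str):
    return {"feasibility": "high", "justification": "...", "suggestions": "..."}

async def main():
    fn_name, fn_desc = await abuild(task_description="Evaluate ML task feasibility.")
    response = await aquery(fn_name=fn_name, text_input="Predict house prices.")
    return fn_name, response

fn_name, response = asyncio.run(main())
print(fn_name)  # FeasibilityChecker-ABC123
```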

pyproject.toml

Lines changed: 4 additions & 13 deletions
````diff
@@ -10,20 +10,11 @@ authors = [
 ]
 description = "A client facing API for interacting with the WeCo AI function builder service."
 readme = "README.md"
-version = "0.1.3"
+version = "0.1.4"
 license = {text = "MIT"}
 requires-python = ">=3.8"
-dependencies = [
-    "asyncio",
-    "httpx[http2]"
-]
-keywords = [
-    "AI",
-    "LLM",
-    "machine learning",
-    "data science",
-    "function builder"
-]
+dependencies = ["asyncio", "httpx[http2]", "pillow"]
+keywords = ["AI", "LLM", "machine learning", "data science", "function builder", "AI function"]
 classifiers = [
     "Programming Language :: Python :: 3",
     "Operating System :: OS Independent",
@@ -34,7 +25,7 @@ classifiers = [
 Homepage = "https://github.com/WecoAI/weco-python"
 
 [project.optional-dependencies]
-dev = ["build", "setuptools_scm", "flake8", "flake8-pyproject", "black", "isort"]
+dev = ["flake8", "flake8-pyproject", "black", "isort", "pytest-asyncio", "pytest-xdist", "build", "setuptools_scm"]
 
 [tool.setuptools]
 packages = ["weco"]
````

tests/asynchronous.py

Lines changed: 0 additions & 28 deletions
This file was deleted.
