Commit 8222b6e

chore: update skills pre 0.8 update (#1010)
## Description

Updates skills to match the current state of the code.

### Introduces a breaking change?

- [ ] Yes
- [ ] No

### Type of change

- [ ] Bug fix (change which fixes an issue)
- [ ] New feature (change which adds functionality)
- [ ] Documentation update (improves or adds clarity to existing documentation)
- [x] Other (chores, tests, code style improvements etc.)

### Tested on

- [ ] iOS
- [ ] Android

### Testing instructions

<!-- Provide step-by-step instructions on how to test your changes. Include setup details if necessary. -->

### Screenshots

<!-- Add screenshots here, if applicable -->

### Related issues

<!-- Link related issues here using #issue-number -->

### Checklist

- [ ] I have performed a self-review of my code
- [ ] I have commented my code, particularly in hard-to-understand areas
- [ ] I have updated the documentation accordingly
- [ ] My changes generate no new warnings

### Additional notes

<!-- Include any additional information, assumptions, or context that reviewers might need to understand this PR. -->
1 parent c2df447 commit 8222b6e

9 files changed

Lines changed: 278 additions & 124 deletions

.cspell-wordlist.txt

Lines changed: 2 additions & 0 deletions
````diff
@@ -181,3 +181,5 @@ nˈɛvəɹ
 ˈɛls
 Synchronizable
 stringifying
+hɛloʊ
+wɜːld
````

skills/canary/react-native-executorch/references/core-utilities.md

Lines changed: 58 additions & 31 deletions
````diff
@@ -88,15 +88,33 @@ const runInference = async () => {
 
 **Use cases:** Download management, storage cleanup, progress tracking, offline-first apps.
 
+## Built-in adapters vs custom adapter
+
+React Native ExecuTorch does not bundle a resource fetcher — you bring your own. Two ready-made adapters are provided:
+
+- **`ExpoResourceFetcher`** from `react-native-executorch-expo-resource-fetcher` — for Expo projects
+- **`BareResourceFetcher`** from `react-native-executorch-bare-resource-fetcher` — for bare React Native projects
+
+Register the adapter once at app startup before using any hooks or modules:
+
+```typescript
+import { initExecutorch } from 'react-native-executorch';
+import { ExpoResourceFetcher } from 'react-native-executorch-expo-resource-fetcher';
+
+initExecutorch({ resourceFetcher: ExpoResourceFetcher });
+```
+
+If neither adapter fits your needs (custom download library, private server, custom caching), you can implement the `ResourceFetcherAdapter` interface yourself. See [Custom Adapter](https://docs.swmansion.com/react-native-executorch/docs/resource-fetcher/custom-adapter) for details.
+
 ## Basic Usage
 
 ```typescript
-import { ResourceFetcher } from 'react-native-executorch';
+import { ExpoResourceFetcher } from 'react-native-executorch-expo-resource-fetcher';
 
 // Download multiple resources with progress tracking
 const downloadModels = async () => {
   try {
-    const uris = await ResourceFetcher.fetch(
+    const uris = await ExpoResourceFetcher.fetch(
       (progress) =>
         console.log(`Download progress: ${(progress * 100).toFixed(1)}%`),
       'https://example.com/llama3_2.pte',
@@ -117,22 +135,22 @@ const downloadModels = async () => {
 ## Pause and Resume Downloads
 
 ```typescript
-import { ResourceFetcher } from 'react-native-executorch';
+import { ExpoResourceFetcher } from 'react-native-executorch-expo-resource-fetcher';
 
-const uris = ResourceFetcher.fetch(
+const uris = ExpoResourceFetcher.fetch(
   (progress) => console.log('Total progress:', progress),
   'https://.../llama3_2.pte',
   'https://.../qwen3.pte'
 ).then((uris) => {
-  console.log('URI resolved as: ', uris); // since we pause the fetch, uris is resolved to null
+  console.log('URI resolved as: ', uris); // null, since we paused
 });
 
-await ResourceFetcher.pauseFetching(
+await ExpoResourceFetcher.pauseFetching(
   'https://.../llama3_2.pte',
   'https://.../qwen3.pte'
 );
 
-const resolvedUris = await ResourceFetcher.resumeFetching(
+const resolvedUris = await ExpoResourceFetcher.resumeFetching(
   'https://.../llama3_2.pte',
   'https://.../qwen3.pte'
 );
@@ -141,17 +159,17 @@ const resolvedUris = await ResourceFetcher.resumeFetching(
 ## Cancel Downloads
 
 ```typescript
-import { ResourceFetcher } from 'react-native-executorch';
+import { ExpoResourceFetcher } from 'react-native-executorch-expo-resource-fetcher';
 
-const uris = ResourceFetcher.fetch(
+const uris = ExpoResourceFetcher.fetch(
   (progress) => console.log('Total progress:', progress),
   'https://.../llama3_2.pte',
   'https://.../qwen3.pte'
 ).then((uris) => {
-  console.log('URI resolved as: ', uris); // since we cancel the fetch, uris is resolved to null
+  console.log('URI resolved as: ', uris); // null, since we cancelled
 });
 
-await ResourceFetcher.cancelFetching(
+await ExpoResourceFetcher.cancelFetching(
   'https://.../llama3_2.pte',
   'https://.../qwen3.pte'
 );
@@ -160,22 +178,22 @@ await ResourceFetcher.cancelFetching(
 ## Manage Downloaded Resources
 
 ```typescript
-import { ResourceFetcher } from 'react-native-executorch';
+import { ExpoResourceFetcher } from 'react-native-executorch-expo-resource-fetcher';
 
 // List all downloaded files
 const listFiles = async () => {
-  const files = await ResourceFetcher.listDownloadedFiles();
+  const files = await ExpoResourceFetcher.listDownloadedFiles();
   console.log('All downloaded files:', files);
 
-  const models = await ResourceFetcher.listDownloadedModels();
+  const models = await ExpoResourceFetcher.listDownloadedModels();
   console.log('Model files:', models);
 };
 
 // Clean up old resources
 const cleanup = async () => {
-  const oldModelUrl = 'https://example.com/old_model.pte';
-
-  await ResourceFetcher.deleteResources(oldModelUrl);
+  await ExpoResourceFetcher.deleteResources(
+    'https://example.com/old_model.pte'
+  );
   console.log('Old model deleted');
 };
 ```
@@ -195,11 +213,12 @@ Resources can be:
 **Progress callback:** Progress is reported as 0-1 for all downloads combined.
 **Null return:** If `fetch()` returns `null`, download was paused or cancelled.
 **Network errors:** Implement retry logic with exponential backoff for reliability.
-**Storage location:** Downloaded files are stored in application's document directory under `react-native-executorch/`
+**Pause/resume on Android:** `BareResourceFetcher` does not support pause/resume on Android. Use `ExpoResourceFetcher` if you need this on Android.
 
 ## Additional references
 
-- [ResourceFetcher full reference docs](https://docs.swmansion.com/react-native-executorch/docs/utilities/resource-fetcher)
+- [ResourceFetcher usage docs](https://docs.swmansion.com/react-native-executorch/docs/resource-fetcher/usage)
+- [Custom Adapter docs](https://docs.swmansion.com/react-native-executorch/docs/resource-fetcher/custom-adapter)
 - [Loading Models guide](https://docs.swmansion.com/react-native-executorch/docs/fundamentals/loading-models)
 
 ---
@@ -220,13 +239,13 @@ import {
   RnExecutorchErrorCode,
 } from 'react-native-executorch';
 
-const llm = new LLMModule({
-  tokenCallback: (token) => console.log(token),
-  messageHistoryCallback: (messages) => console.log(messages),
-});
-
 try {
-  await llm.load(LLAMA3_2_1B_QLORA, (progress) => console.log(progress));
+  const llm = await LLMModule.fromModelName(
+    LLAMA3_2_1B_QLORA,
+    (progress) => console.log(progress),
+    (token) => console.log(token),
+    (messages) => console.log(messages)
+  );
   await llm.sendMessage('Hello!');
 } catch (err) {
   if (err instanceof RnExecutorchError) {
@@ -242,21 +261,27 @@ try {
 
 ```typescript
 import {
+  LLMModule,
+  LLAMA3_2_1B_QLORA,
   RnExecutorchError,
   RnExecutorchErrorCode,
 } from 'react-native-executorch';
 
-const handleModelError = async (llm, message: string) => {
+const llm = await LLMModule.fromModelName(
+  LLAMA3_2_1B_QLORA,
+  (progress) => console.log(progress),
+  (token) => console.log(token),
+  (messages) => console.log(messages)
+);
+
+const handleModelError = async (message: string) => {
   try {
     await llm.sendMessage(message);
   } catch (err) {
     if (err instanceof RnExecutorchError) {
       switch (err.code) {
         case RnExecutorchErrorCode.ModuleNotLoaded:
-          console.error('Model not loaded. Loading now...');
-          await llm.load(LLAMA3_2_1B_QLORA);
-          // Retry the message
-          await llm.sendMessage(message);
+          console.error('Model not loaded.');
           break;
 
         case RnExecutorchErrorCode.ModelGenerating:
@@ -267,7 +292,9 @@ const handleModelError = async (llm, message: string) => {
         case RnExecutorchErrorCode.InvalidConfig:
           console.error('Invalid configuration:', err.message);
           // Reset to default config
-          await llm.configure({ topp: 0.9, temperature: 0.7 });
+          await llm.configure({
+            generationConfig: { topp: 0.9, temperature: 0.7 },
+          });
           break;
 
         default:
````
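The core-utilities notes above recommend "retry logic with exponential backoff" for network errors but leave the implementation open. A minimal, generic sketch of such a wrapper — the `retryWithBackoff` helper and its delay values are assumptions for illustration, not part of the react-native-executorch API:

```typescript
// Hypothetical helper: retry an async operation with exponential backoff.
// Not part of react-native-executorch; it illustrates the "retry logic
// with exponential backoff" note from the reference above.
async function retryWithBackoff<T>(
  op: () => Promise<T>,
  maxAttempts = 3,
  baseDelayMs = 500
): Promise<T> {
  let lastError: unknown;
  for (let attempt = 0; attempt < maxAttempts; attempt++) {
    try {
      return await op();
    } catch (err) {
      lastError = err;
      // Wait baseDelayMs * 2^attempt before the next try.
      const delay = baseDelayMs * 2 ** attempt;
      await new Promise((resolve) => setTimeout(resolve, delay));
    }
  }
  throw lastError;
}

// Usage sketch (adapter call shape taken from the fetch examples above):
// const uris = await retryWithBackoff(() =>
//   ExpoResourceFetcher.fetch(
//     (p) => console.log(p),
//     'https://example.com/llama3_2.pte'
//   )
// );
```
Wrapping the whole `fetch(...)` call keeps the adapter itself unchanged; the backoff base and attempt count should be tuned to the size of the models being downloaded.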

skills/canary/react-native-executorch/references/reference-audio.md

Lines changed: 34 additions & 16 deletions
````diff
@@ -225,8 +225,7 @@ const model = useTextToSpeech({
 const audioContext = new AudioContext({ sampleRate: 24000 });
 
 const handleSpeech = async (text: string) => {
-  const speed = 1.0;
-  const waveform = await model.forward(text, speed);
+  const waveform = await model.forward({ text, speed: 1.0 });
 
   const audioBuffer = audioContext.createBuffer(1, waveform.length, 24000);
   audioBuffer.getChannelData(0).set(waveform);
@@ -242,21 +241,43 @@ const handleSpeech = async (text: string) => {
 
 ```typescript
 // Stream chunks for lower latency
-await tts.stream({
+await model.stream({
   text: 'Long text to be streamed chunk by chunk...',
   speed: 1.0,
+  onBegin: async () => console.log('Streaming started'),
   onNext: async (chunk) => {
     return new Promise((resolve) => {
-      const buffer = ctx.createBuffer(1, chunk.length, 24000);
+      const buffer = audioContext.createBuffer(1, chunk.length, 24000);
       buffer.getChannelData(0).set(chunk);
 
-      const source = ctx.createBufferSource();
+      const source = audioContext.createBufferSource();
       source.buffer = buffer;
-      source.connect(ctx.destination);
+      source.connect(audioContext.destination);
       source.onEnded = () => resolve();
       source.start();
     });
   },
+  onEnd: async () => console.log('Streaming finished'),
+  stopAutomatically: true,
+});
+```
+
+## Phoneme-based synthesis
+
+If you have pre-computed phonemes, use `forwardFromPhonemes` or `streamFromPhonemes` to skip the text-to-phoneme step:
+
+```typescript
+const waveform = await model.forwardFromPhonemes({
+  phonemes: 'hɛloʊ',
+  speed: 1.0,
+});
+
+await model.streamFromPhonemes({
+  phonemes: 'hɛloʊ wɜːld',
+  speed: 1.0,
+  onNext: async (chunk) => {
+    /* play chunk */
+  },
 });
 ```
 
@@ -268,17 +289,14 @@ For all available models check out [this exported HuggingFace models collection]
 
 **Available Voices:**
 
-- `KOKORO_VOICE_AF_HEART` - Female, heart
-- `KOKORO_VOICE_AF_SKY` - Female, sky
-- `KOKORO_VOICE_AF_BELLA` - Female, bella
-- `KOKORO_VOICE_AF_NICOLE` - Female, nicole
-- `KOKORO_VOICE_AF_SARAH` - Female, sarah
-- `KOKORO_VOICE_AM_ADAM` - Male, adam
-- `KOKORO_VOICE_AM_MICHAEL` - Male, michael
+- `KOKORO_VOICE_AF_HEART` - American Female, heart
+- `KOKORO_VOICE_AF_RIVER` - American Female, river
+- `KOKORO_VOICE_AF_SARAH` - American Female, sarah
+- `KOKORO_VOICE_AM_ADAM` - American Male, adam
+- `KOKORO_VOICE_AM_MICHAEL` - American Male, michael
+- `KOKORO_VOICE_AM_SANTA` - American Male, santa
 - `KOKORO_VOICE_BF_EMMA` - British Female, emma
-- `KOKORO_VOICE_BF_ISABELLA` - British Female, isabella
-- `KOKORO_VOICE_BM_GEORGE` - British Male, george
-- `KOKORO_VOICE_BM_LEWIS` - British Male, lewis
+- `KOKORO_VOICE_BM_DANIEL` - British Male, daniel
 
 ## Troubleshooting
 
````
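In the streaming TTS diff above, `onNext` returns a promise that resolves only in the source's ended handler, which forces chunks to play back-to-back rather than overlapping. A generic sketch of that serialization pattern with the audio API stubbed out — `playChunk` is a hypothetical stand-in for the `AudioContext` buffer/source code, not a real react-native-executorch function:

```typescript
type Chunk = Float32Array;

// Hypothetical stand-in for the AudioContext playback code above:
// in the real example, resolve() is called from source.onEnded; here a
// timer simulates the chunk's playback duration.
function playChunk(chunk: Chunk, durationMs: number): Promise<void> {
  return new Promise((resolve) => {
    setTimeout(resolve, durationMs);
  });
}

// Awaiting each chunk's promise before starting the next one is what
// serializes playback in the onNext pattern.
async function playSequentially(chunks: Chunk[]): Promise<number> {
  let played = 0;
  for (const chunk of chunks) {
    // Approximate duration at the 24 kHz sample rate used above.
    const durationMs = (chunk.length / 24000) * 1000;
    await playChunk(chunk, durationMs);
    played++;
  }
  return played;
}
```
If `onNext` instead returned immediately, each new chunk's source would start while the previous one was still playing, producing overlapping audio.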
skills/canary/react-native-executorch/references/reference-cv-2.md

Lines changed: 11 additions & 7 deletions
````diff
@@ -22,7 +22,11 @@ const model = useStyleTransfer({ model: STYLE_TRANSFER_CANDY });
 const imageUri = 'file:///Users/.../photo.png';
 
 try {
-  const generatedImageUrl = await model.forward(imageUri);
+  // Returns PixelData (raw RGB buffer) by default
+  const pixelData = await model.forward(imageUri);
+
+  // Pass 'url' as second argument to get a file URI instead
+  const generatedImageUrl = await model.forward(imageUri, 'url');
   console.log('Styled image:', generatedImageUrl);
 } catch (error) {
   console.error(error);
@@ -33,10 +37,10 @@ try {
 
 **Model constants:**
 
-- `STYLE_TRANSFER_CANDY` - Candy artistic style
-- `STYLE_TRANSFER_MOSAIC` - Mosaic artistic style
-- `STYLE_TRANSFER_UDNIE` - Udnie artistic style
-- `STYLE_TRANSFER_RAIN_PRINCESS` - Rain princess artistic style
+- `STYLE_TRANSFER_CANDY` / `STYLE_TRANSFER_CANDY_QUANTIZED`
+- `STYLE_TRANSFER_MOSAIC` / `STYLE_TRANSFER_MOSAIC_QUANTIZED`
+- `STYLE_TRANSFER_UDNIE` / `STYLE_TRANSFER_UDNIE_QUANTIZED`
+- `STYLE_TRANSFER_RAIN_PRINCESS` / `STYLE_TRANSFER_RAIN_PRINCESS_QUANTIZED`
 
 For the latest available models reference exported models in [HuggingFace Style Transfer collection](https://huggingface.co/collections/software-mansion/style-transfer)
 
@@ -101,7 +105,7 @@ function App() {
 }
 ```
 
-**Model constants:** `BK_SDM_TINY_VPRED_256`
+**Model constants:** `BK_SDM_TINY_VPRED_256`, `BK_SDM_TINY_VPRED_512`
 
 For the latest available models reference exported models in [HuggingFace Text to Image collection](https://huggingface.co/collections/software-mansion/text-to-image)
 
@@ -173,7 +177,7 @@ try {
 
 ## Available Models
 
-**Model constants:** `CLIP_VIT_BASE_PATCH32_IMAGE`
+**Model constants:** `CLIP_VIT_BASE_PATCH32_IMAGE`, `CLIP_VIT_BASE_PATCH32_IMAGE_QUANTIZED`
 
 For the latest available models reference exported models in [HuggingFace Image Embeddings collection](https://huggingface.co/collections/software-mansion/image-embeddings)
 
````
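Image embeddings such as the `CLIP_VIT_BASE_PATCH32_IMAGE` output in the diff above are typically compared with cosine similarity. A minimal, self-contained sketch — the short vectors are made-up illustrations, not real CLIP outputs:

```typescript
// Cosine similarity between two embedding vectors:
// dot(a, b) / (|a| * |b|), in [-1, 1]; closer to 1 means more similar.
function cosineSimilarity(a: number[], b: number[]): number {
  if (a.length !== b.length) {
    throw new Error('Embeddings must have the same length');
  }
  let dot = 0;
  let normA = 0;
  let normB = 0;
  for (let i = 0; i < a.length; i++) {
    dot += a[i] * b[i];
    normA += a[i] * a[i];
    normB += b[i] * b[i];
  }
  return dot / (Math.sqrt(normA) * Math.sqrt(normB));
}

// Toy 3-dimensional vectors for illustration; real CLIP ViT-B/32
// embeddings are 512-dimensional arrays.
const similarity = cosineSimilarity([1, 0, 1], [1, 0, 1]);
```
Cosine similarity ignores vector magnitude, so it compares embedding direction only, which is the standard way CLIP-style embeddings are ranked.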