# SlashML Python client

[SlashML](https://www.slashml.com/)

# Introduction

## Overview

This is a Python client (SDK) for SlashML. It lets you use the apps that are available and active in the [SlashML dashboard](https://www.slashml.com/dashboard).

The apps can be combined in the same code: for example, to transcribe an audio file and then summarize it, call speech-to-text followed by summarization.

State-of-the-art AI models from several service providers are available. SlashML also benchmarks these models for you, which gives you an idea of the best service provider for your application. For the full list of models available through SlashML, see [Available service providers](#available-service-providers).

The client lets you run transcription jobs from your Python code or a Jupyter notebook. A transcription can be done in a few lines of code:
```
import slashml

speech_to_text = slashml.SpeechToText()

# audio_url is a link to your audio file
transcribe_id = speech_to_text.transcribe(audio_url, service_provider="aws")
status = speech_to_text.status(transcribe_id, service_provider="aws")
```
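The `status` call can be polled until the job reaches a terminal state. A minimal polling helper is sketched below; note that the response shape and the status values are illustrative assumptions, not the documented SlashML API:

```python
import time

def poll_until_done(get_status, job_id, interval=5.0, timeout=300.0):
    """Poll get_status(job_id) until it reports a terminal state.

    get_status is any callable returning a dict with a "status" key,
    e.g. lambda jid: speech_to_text.status(jid, service_provider="aws").
    The keys and status values here are assumptions for illustration.
    """
    deadline = time.monotonic() + timeout
    while time.monotonic() < deadline:
        response = get_status(job_id)
        if response.get("status") in ("completed", "error"):
            return response
        time.sleep(interval)
    raise TimeoutError(f"job {job_id} did not finish within {timeout} seconds")
```

Adjust `interval` to stay under the throttling limit described below.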

## Set up and usage

There is a limit (throttling) on the number of calls you can perform. The code runs without a token (API key), but throttling kicks in and prevents new jobs once you exceed 10 calls per minute.

If you intend to use the service more frequently, generate an API key from the [SlashML dashboard](https://www.slashml.com/dashboard); this raises the throttling limit.

Sign up and grab your token from the [SlashML dashboard](https://www.slashml.com/dashboard) (Settings > New API key), then authenticate by setting it as an environment variable (or when you initialize the service; see the Quickstart tutorial).

In your terminal:
36 | 30 | ``` |
37 | | -# call the class |
38 | | -speect_to_text = speechtotext.SpeechToText() |
39 | | -file_location="path/to/your/file.mp3" |
40 | | -# when |
41 | | -API_KEY="SLASH_ML_API_KEY" |
42 | | -model_choice="assembly" |
43 | | -result_upload = speect_to_text.upload_audio(file_location,API_KEY, model_choice) |
44 | | -print(result_upload) |
| 31 | +export SLASHML_API_KEY=[token] |
| 32 | +``` |
| 33 | +or including it in your python code as follows: |
45 | 34 | ``` |
46 | | -Save the upload_url. You can use this url link in the rest of the calls. |
47 | 35 |
|
| 36 | +import os |
| 37 | +os.environ["SLASHML_API_KEY"] = "slashml_api_token" |
48 | 38 |
|
49 | | -2- Submit your audio file for transcription |
50 | 39 | ``` |

If you prefer to use the API calls directly, the documentation is available [here](https://www.slashml.com/dashboard).

## Available service providers

### Speech-to-text

- AssemblyAI
- AWS
- Whisper (OpenAI)

### Summarization

- Hugging Face (summarization based on Meta ...)
- OpenAI

# Quickstart tutorial

## Introduction

### Start with initializing the service

### Specify your service provider

In this step, benchmarking will help you decide which service provider is best for you.

- For speech-to-text: `"assembly"`, `"aws"`, `"whisper"`
- For summarization: `"hugging-face"`, `"openai"`

Et voilà! Next steps:

- `pip install slashml`