2 changes: 1 addition & 1 deletion docs/docs/Agents/mcp-client.mdx
@@ -100,7 +100,7 @@ To leverage flows-as-tools, use the **MCP Tools** component to connect to a proj
| tool | String | Input parameter. The specific tool to execute from the connected MCP server. Leave blank to allow access to all tools. |
| use_cache | Boolean | Input parameter. Enable caching of MCP server and tools to improve performance. Default: `false`. |
| verify_ssl | Boolean | Input parameter. Enable SSL certificate verification for HTTPS connections. Default: `true`. |
| response | DataFrame | Output parameter. [`DataFrame`](/data-types#dataframe) containing the response from the executed tool. |
| response | Table | Output parameter. [`Table`](/data-types#table) containing the response from the executed tool. |

## Manage connected MCP servers

2 changes: 1 addition & 1 deletion docs/docs/Components/api-request.mdx
@@ -16,7 +16,7 @@ The **API Request** component constructs and sends HTTP requests using URLs or c

You can enable additional request options and fields in the component's parameters.

Returns a [`Data` object](/data-types#data) containing the response.
Returns a [`JSON` object](/data-types#json) containing the response.

For provider-specific API components, see <Icon name="Blocks" aria-hidden="true" /> [**Bundles**](/components-bundle-components).
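The returned `JSON` object can be sketched in plain Python. This is a hypothetical illustration only: the key names (`source`, `status_code`, `headers`, `result`) are assumptions for the example, not the component's documented contract.

```python
# Hypothetical sketch of the kind of JSON object the component returns.
# Key names here are assumptions for illustration, not the component's contract.
response_data = {
    "source": "https://example.com/api/items",      # requested URL (invented)
    "status_code": 200,
    "headers": {"content-type": "application/json"},
    "result": {"items": [1, 2, 3]},                 # parsed response body (invented)
}

def succeeded(data):
    """True when the sketched response carries a 2xx status code."""
    return 200 <= data["status_code"] < 300
```

Downstream components would typically read the parsed body (`result` in this sketch) rather than the raw headers.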

18 changes: 9 additions & 9 deletions docs/docs/Components/batch-run.mdx
@@ -9,12 +9,12 @@ import TabItem from '@theme/TabItem';
import PartialParams from '@site/docs/_partial-hidden-params.mdx';
import PartialDevModeWindows from '@site/docs/_partial-dev-mode-windows.mdx';

The **Batch Run** component runs a language model over _each row of one text column_ in a [`DataFrame`](/data-types#dataframe), and then returns a new `DataFrame` with the original text and an LLM response.
The **Batch Run** component runs a language model over _each row of one text column_ in a [`Table`](/data-types#table), and then returns a new `Table` with the original text and an LLM response.
The output contains the following columns:

* `text_input`: The original text from the input `DataFrame`
* `text_input`: The original text from the input `Table`
* `model_response`: The model's response for each input
* `batch_index`: The 0-indexed processing order for all rows in the `DataFrame`
* `batch_index`: The 0-indexed processing order for all rows in the `Table`
* `metadata` (optional): Additional information about the processing
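The shape of this output can be sketched in plain Python. This is a hypothetical illustration, not the component's implementation: the input rows and the stub model are invented, but the column names match the list above.

```python
# Hypothetical sketch of the Batch Run output shape; the rows and the stub
# model are invented, but the column names follow the docs.
input_rows = [{"name": "Ada Lovelace"}, {"name": "Alan Turing"}]

def run_batch(rows, column_name, model):
    """Apply `model` to one text column of each row, preserving row order."""
    results = []
    for index, row in enumerate(rows):
        results.append({
            "text_input": row[column_name],          # original text
            "model_response": model(row[column_name]),
            "batch_index": index,                    # 0-indexed processing order
        })
    return results

# The lambda stands in for the connected language model component.
table = run_batch(input_rows, "name", model=lambda text: f"Card for {text}")
```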

## Use the Batch Run component in a flow
@@ -26,26 +26,26 @@ This is demonstrated in the following example.

1. Connect any language model component to a **Batch Run** component's **Language model** port.

2. Connect `DataFrame` output from another component to the **Batch Run** component's **DataFrame** input.
2. Connect `Table` output from another component to the **Batch Run** component's **DataFrame** input.
For example, you could connect a **Read File** component with a CSV file.

3. In the **Batch Run** component's **Column Name** field, enter the name of the column in the incoming `DataFrame` that contains the text to process.
3. In the **Batch Run** component's **Column Name** field, enter the name of the column in the incoming `Table` that contains the text to process.
For example, if you want to extract text from a `name` column in a CSV file, enter `name` in the **Column Name** field.

4. Connect the **Batch Run** component's **Batch Results** output to a **Parser** component's **DataFrame** input.

5. Optional: In the **Batch Run** [component menu](/concepts-components#component-menus), enable the **System Message** parameter, click **Close**, and then enter an instruction for how you want the LLM to process each cell extracted from the file.
For example, `Create a business card for each name.`

6. In the **Parser** component's **Template** field, enter a template for processing the **Batch Run** component's new `DataFrame` columns (`text_input`, `model_response`, and `batch_index`):
6. In the **Parser** component's **Template** field, enter a template for processing the **Batch Run** component's new `Table` columns (`text_input`, `model_response`, and `batch_index`):

For example, this template uses three columns from the resulting, post-batch `DataFrame`:
For example, this template uses three columns from the resulting post-batch `Table`:

```text
record_number: {batch_index}, name: {text_input}, summary: {model_response}
```

7. To test the processing, click the **Parser** component, click <Icon name="Play" aria-hidden="true" /> **Run component**, and then click <Icon name="TextSearch" aria-hidden="true" /> **Inspect output** to view the final `DataFrame`.
7. To test the processing, click the **Parser** component, click <Icon name="Play" aria-hidden="true" /> **Run component**, and then click <Icon name="TextSearch" aria-hidden="true" /> **Inspect output** to view the final `Table`.

You can also connect a **Chat Output** component to the **Parser** component if you want to see the output in the **Playground**.
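The template substitution in step 6 behaves like Python's `str.format` applied to each output row. A minimal sketch, with invented rows:

```python
# Sketch of how the Parser template in step 6 maps onto each batch output row.
# The rows are invented; the template matches the example above.
template = "record_number: {batch_index}, name: {text_input}, summary: {model_response}"

rows = [
    {"batch_index": 0, "text_input": "Ada", "model_response": "Analyst and writer."},
    {"batch_index": 1, "text_input": "Alan", "model_response": "Mathematician."},
]

# One formatted line per row, as the Parser component would produce.
lines = [template.format(**row) for row in rows]
```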

@@ -61,5 +61,5 @@ For example, `Create a business card for each name.`
| column_name | MessageTextInput | Input parameter. The name of the DataFrame column to treat as text messages. If empty, all columns are formatted in TOML. |
| output_column_name | MessageTextInput | Input parameter. Name of the column where the model's response is stored. Default: `model_response`. |
| enable_metadata | BoolInput | Input parameter. If `True`, add metadata to the output DataFrame. |
| batch_results | DataFrame | Output parameter. A DataFrame with all original columns plus the model's response column. |
| batch_results | Table | Output parameter. A `Table` with all original columns plus the model's response column. |

6 changes: 3 additions & 3 deletions docs/docs/Components/bundles-agentics.mdx
@@ -61,7 +61,7 @@ For example, this schema definition creates the following DataFrame output:
| Name | Type | Description |
|------|------|-------------|
| Language Model | Dropdown | Select the LLM provider and model. Use guided experience. |
| Input DataFrame | DataFrame | Optional. Example DataFrame to learn from; only first 50 rows used. If not provided, Schema is used. |
| Input DataFrame | Table | Optional. Example `Table` to learn from; only the first 50 rows are used. If not provided, Schema is used. |
| Schema | Table | Define columns to generate when no Input DataFrame is provided. See the component's schema definition. |
| Instructions | String | Optional instructions for generation. |
| Number of Rows to Generate | Integer | How many synthetic rows to create. Default: 10. |
@@ -97,7 +97,7 @@ For example, **aMap** keeps each input row and fills in `sentiment`, `confidence
| Name | Type | Description |
|------|------|-------------|
| Language Model | Dropdown | Select the LLM provider and model. Use guided experience. |
| Input DataFrame | DataFrame | Input DataFrame (list of dicts or DataFrame). Each row is processed independently. |
| Input DataFrame | Table | Input DataFrame (list of dicts or DataFrame). Each row is processed independently. |
| Schema | Table | Define the structure and types for generated columns. See the component's schema definition. |
| Instructions | String | Natural language instructions for transforming each row into the output schema. |
| As List | Boolean | If true, generate multiple instances of the schema per row and concatenate. |
@@ -138,7 +138,7 @@ It sums revenue into `total_revenue`, identifies the best-selling product in `be
| Name | Type | Description |
|------|------|-------------|
| Language Model | Dropdown | Select the LLM provider and model. |
| Input DataFrame | DataFrame | Input DataFrame (list of dicts or DataFrame). Required. |
| Input DataFrame | Table | Input `Table` (list of dicts or `Table`). Required. |
| Schema | Table | Define the structure and types for the aggregated output. See the component's schema definition. |
| As List | Boolean | If true, output is a list of instances of the schema. |
| Instructions | String | Optional instructions for aggregation. If omitted, the LLM infers from field descriptions. |
8 changes: 4 additions & 4 deletions docs/docs/Components/bundles-amazon.mdx
@@ -67,8 +67,8 @@ For more information about using embedding model components in flows, see [Embed
## S3 Bucket Uploader

The **S3 Bucket Uploader** component uploads files to an Amazon S3 bucket.
It is designed to process `Data` input from a **Read File** or **Directory** component.
If you upload `Data` from other components, test the results before running the flow in production.
It is designed to process `JSON` input from a **Read File** or **Directory** component.
If you upload `JSON` from other components, test the results before running the flow in production.

Requires the `boto3` package, which is included in your Langflow installation.

@@ -83,8 +83,8 @@ The component produces logs but it doesn't emit output to the flow.
| **AWS Access Key ID** | SecretString | Input parameter. AWS Access Key ID for authentication. |
| **AWS Secret Key** | SecretString | Input parameter. AWS Secret Key for authentication. |
| **Bucket Name** | String | Input parameter. The name of the S3 bucket to upload files to. |
| **Strategy for file upload** | String | Input parameter. The file upload strategy. **Store Data** (default) iterates over `Data` inputs, logs the file path and text content, and uploads each file to the specified S3 bucket if both file path and text content are available. **Store Original File** iterates through the list of data inputs, retrieves the file path from each data item, uploads the file to the specified S3 bucket if the file path is available, and logs the file path being uploaded. |
| **Data Inputs** | Data | Input parameter. The `Data` input to iterate over and upload as files in the specified S3 bucket. |
| **Strategy for file upload** | String | Input parameter. The file upload strategy. **Store Data** (default) iterates over `JSON` inputs, logs the file path and text content, and uploads each file to the specified S3 bucket if both file path and text content are available. **Store Original File** iterates through the list of data inputs, retrieves the file path from each data item, uploads the file to the specified S3 bucket if the file path is available, and logs the file path being uploaded. |
| **Data Inputs** | JSON | Input parameter. The `JSON` input to iterate over and upload as files in the specified S3 bucket. |
| **S3 Prefix** | String | Input parameter. Optional prefix (folder path) within the S3 bucket where files will be uploaded. |
| **Strip Path** | Boolean | Input parameter. Whether to strip the file path when uploading. Default: `false`. |
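The branching between the two upload strategies described above can be sketched in plain Python. This is an illustration under stated assumptions: the `upload` callable stands in for the component's internal boto3 call, and the field names `file_path` and `text` are invented for the example.

```python
# Sketch of the two upload strategies; the `upload` callable is a stand-in
# for the real boto3 call, and the item field names are assumptions.
def upload_all(data_inputs, bucket, strategy, upload):
    uploaded = []
    for item in data_inputs:
        path, text = item.get("file_path"), item.get("text")
        if strategy == "Store Data" and path and text:
            upload(bucket, path, text.encode())  # upload the text content as the file body
            uploaded.append(path)
        elif strategy == "Store Original File" and path:
            upload(bucket, path, None)           # upload the file at `path` itself
            uploaded.append(path)
    return uploaded
```

Items missing a file path (or, for **Store Data**, missing text content) are skipped rather than raising, which mirrors the conditional behavior described in the table.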

2 changes: 1 addition & 1 deletion docs/docs/Components/bundles-apify.mdx
@@ -23,7 +23,7 @@ The component can be used to perform tasks as a standalone step in a flow or as

To enable **Tool Mode** for this component, change the component's output type from **Output** to **Tool**, and then connect it to the **Tools** port on an **Agent** component.

**Apify Actors** components output the results of the Actor run as a JSON object in Langflow's [`Data` type](/data-types#data).
**Apify Actors** components output the results of the Actor run as a JSON object in Langflow's [`JSON` type](/data-types#json).

## Example Apify Actors flows

2 changes: 1 addition & 1 deletion docs/docs/Components/bundles-arxiv.mdx
@@ -13,7 +13,7 @@ This page describes the components that are available in the **arXiv** bundle.

This component searches and retrieves papers from [arXiv.org](https://arXiv.org).

It returns a list of search results as a [`DataFrame`](/data-types#dataframe).
It returns a list of search results as a [`Table`](/data-types#table).

### arXiv search parameters

2 changes: 1 addition & 1 deletion docs/docs/Components/bundles-bing.mdx
@@ -14,7 +14,7 @@ This page describes the components that are available in the **Bing** bundle.

This component allows you to call the Bing Search API.

It returns a list of search results as a [`DataFrame`](/data-types#dataframe).
It returns a list of search results as a [`Table`](/data-types#table).

### Bing Search API parameters

2 changes: 1 addition & 1 deletion docs/docs/Components/bundles-cassandra.mdx
@@ -52,7 +52,7 @@ For information about accepted values and functionality, see [Vector search in C
| setup_mode | String | Input parameter. Configuration mode for setting up a Cassandra table. |
| cluster_kwargs | Dict | Input parameter. Additional keyword arguments for a Cassandra cluster. |
| search_query | String | Input parameter. Query string for similarity search. Only relevant for reads. |
| ingest_data | Data | Input parameter. Data to be loaded into the vector store as raw chunks and embeddings. Only relevant for writes. |
| ingest_data | JSON | Input parameter. Data to be loaded into the vector store as raw chunks and embeddings. Only relevant for writes. |
| embedding | Embeddings | Input parameter. Embedding function to use. |
| number_of_results | Integer | Input parameter. Number of results to return in search. Only relevant for reads. |
| search_type | String | Input parameter. Type of search to perform. Only relevant for reads. |
4 changes: 2 additions & 2 deletions docs/docs/Components/bundles-chroma.mdx
@@ -39,7 +39,7 @@ The following example flow uses one **Chroma DB** component for both reads and w

![ChromaDB receiving split text](/img/component-chroma-db.png)

* When writing, it splits `Data` from a [**URL** component](/url) into chunks, computes embeddings with attached **Embedding Model** component, and then loads the chunks and embeddings into the Chroma vector store.
* When writing, it splits `JSON` from a [**URL** component](/url) into chunks, computes embeddings with the attached **Embedding Model** component, and then loads the chunks and embeddings into the Chroma vector store.
To trigger writes, click <Icon name="Play" aria-hidden="true"/> **Run component** on the **Chroma DB** component.

* When reading, it uses chat input to perform a similarity search on the vector store, and then prints the search results to the chat.
@@ -61,7 +61,7 @@ For information about accepted values and functionality, see the provider's docu
|------|------|-------------|
| **Collection Name** (`collection_name`) | String | Input parameter. The name of your Chroma vector store collection. Default: `langflow`. |
| **Persist Directory** (`persist_directory`) | String | Input parameter. To persist the Chroma database, enter a relative or absolute path to a directory to store the `chroma.sqlite3` file. Leave empty for an ephemeral database. When reading or writing to an existing persistent database, specify the path to the persistent directory. |
| **Ingest Data** (`ingest_data`) | Data or DataFrame | Input parameter. `Data` or `DataFrame` input containing the records to write to the vector store. Only relevant for writes. |
| **Ingest Data** (`ingest_data`) | JSON or Table | Input parameter. `JSON` or `Table` input containing the records to write to the vector store. Only relevant for writes. |
| **Search Query** (`search_query`) | String | Input parameter. The query to use for vector search. Only relevant for reads. |
| **Cache Vector Store** (`cache_vector_store`) | Boolean | Input parameter. If `true`, the component caches the vector store in memory for faster reads. Default: Enabled (`true`). |
| **Embedding** (`embedding`) | Embeddings | Input parameter. The embedding function to use for the vector store. By default, Chroma DB uses its built-in embeddings model, or you can attach an **Embedding Model** component to use a different provider or model. |
4 changes: 2 additions & 2 deletions docs/docs/Components/bundles-cohere.mdx
@@ -57,7 +57,7 @@ For more information about using embedding model components in flows, see [Embed

This component finds and reranks documents using the Cohere API.

Outputs `Data` containing the reranked documents, limited by the **Top N** parameter.
Outputs `JSON` containing the reranked documents, limited by the **Top N** parameter.

### Cohere Rerank parameters

@@ -66,7 +66,7 @@ Outputs `Data` containing the reranked documents, limited by the **Top N** param
| Name | Type | Description |
|------|------|-------------|
| **Search Query** | String | Input parameter. The search query for reranking documents. |
| **Search Results** | Data | Input parameter. Connect search results output from a vector store component. Use this parameter to apply reranking after running a similarity search on your vector database. |
| **Search Results** | JSON | Input parameter. Connect search results output from a vector store component. Use this parameter to apply reranking after running a similarity search on your vector database. |
| **Top N** | Integer | Input parameter. The number of documents to return after reranking. Default: `3`. |
| **Cohere API Key** | SecretString | Input parameter. Your Cohere API key. |
| **Model** | String | Input parameter. The re-ranker model to use. Default: `rerank-english-v3.0`. |
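The rerank-then-truncate behavior of this component can be sketched generically. This is a hedged illustration: the toy `overlap` scorer stands in for the Cohere rerank model, which assigns real relevance scores via the API.

```python
# Generic sketch of rerank-then-truncate: score each search result against
# the query, sort by score, keep Top N. `score` stands in for the Cohere
# rerank model; `overlap` below is a toy scorer for illustration only.
def rerank(query, documents, score, top_n=3):
    ranked = sorted(documents, key=lambda doc: score(query, doc), reverse=True)
    return ranked[:top_n]

def overlap(query, doc):
    """Toy relevance score: count of shared lowercase words."""
    return len(set(query.lower().split()) & set(doc.lower().split()))
```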
6 changes: 3 additions & 3 deletions docs/docs/Components/bundles-composio.mdx
@@ -230,7 +230,7 @@ For example, if you are using the Composio **Gmail** component, your Composio AP
When used as tools for an agent, Composio components output [`Tools`](/data-types#tool), which is a list of tools for use by an agent.
When called by the agent, the response from the Composio service is ingested by the agent, not passed directly as output to the user or application.

In non-agentic use cases, the output is a [`DataFrame`](/data-types#dataframe) containing the response from the specified Composio service, depending on the component and action used in the flow.
In non-agentic use cases, the output is a [`Table`](/data-types#table) containing the response from the specified Composio service, depending on the component and action used in the flow.

Because the **Composio Tools** component supports _only_ agentic use, it cannot output `DataFrame`.
All single-service Composio components can output either `DataFrame` or `Tools`.
Because the **Composio Tools** component supports _only_ agentic use, it cannot output `Table`.
All single-service Composio components can output either `Table` or `Tools`.
2 changes: 1 addition & 1 deletion docs/docs/Components/bundles-couchbase.mdx
@@ -49,7 +49,7 @@ For information about accepted values and functionality, see the [Couchbase docu
| scope_name | String | Input parameter. Name of the Couchbase scope. Required. |
| collection_name | String | Input parameter. Name of the Couchbase collection. Required. |
| index_name | String | Input parameter. Name of the Couchbase index. Required. |
| ingest_data | Data | Input parameter. The records to load into the vector store. Only relevant for writes. |
| ingest_data | JSON | Input parameter. The records to load into the vector store. Only relevant for writes. |
| search_query | String | Input parameter. The query string for vector search. Only relevant for reads. |
| cache_vector_store | Boolean | Input parameter. If `true`, the component caches the vector store in memory for faster reads. Default: Enabled (`true`). |
| embedding | Embeddings | Input parameter. The embedding function to use for the vector store. |