* imageGen: Gemini Imagen 4.0 Generate model for standard image generation
  * When to use: This model is suitable for standard image generation tasks, offering a balance between quality and computational efficiency. It is ideal for applications ranging from simple illustrations to basic visualizations.
  * Why use: Choose this model when you need to generate images quickly and efficiently, balancing decent quality with the desire to keep costs down.
* imageUltra: Gemini Imagen 4.0 Ultra model for high-resolution image generation
  * When to use: This model is ideal for applications that require ultra-high-resolution images with exceptional detail and quality, such as professional design work, marketing materials, and high-end visual content creation.
  * Why use: Choose this model when image quality is the top priority and you need the best possible resolution and detail for your visual assets, despite the higher computational costs involved.
* imageFast: Gemini Imagen 4.0 Fast model for rapid image generation
  * When to use: This model is designed for scenarios where quick turnaround times are essential, such as real-time applications, rapid prototyping, or situations where speed is prioritized over ultra-high resolution.
  * Why use: Choose this model when you need to generate images swiftly without significantly compromising on quality, making it ideal for dynamic content creation and interactive applications.
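The three Imagen variants above can be sketched as a simple selection helper. The model id strings below follow Google's published Imagen 4 naming convention but should be treated as assumptions here, not values confirmed by this repository:

```typescript
// Illustrative map from the config names above to Imagen 4 model ids.
// The id strings are assumptions based on Google's naming, not verified here.
const imageModels = {
  imageGen: "imagen-4.0-generate-001", // balanced quality and cost
  imageUltra: "imagen-4.0-ultra-generate-001", // highest detail, higher cost
  imageFast: "imagen-4.0-fast-generate-001", // lowest latency
} as const;

// Pick the variant that matches the job's priority.
function pickImagenModel(priority: "quality" | "speed" | "balanced"): string {
  if (priority === "quality") return imageModels.imageUltra;
  if (priority === "speed") return imageModels.imageFast;
  return imageModels.imageGen;
}

console.log(pickImagenModel("speed")); // "imagen-4.0-fast-generate-001"
```

In practice the chosen id would be passed to whichever image-generation client the project uses; the helper only encodes the quality/speed/cost trade-off described above.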
* googleWithMemoryLite: Google Generative AI Flash Lite model integrated with Supermemory for enhanced conversational capabilities
  * When to use: This configuration is ideal for budget-conscious applications that still require context-aware interactions, such as lightweight chatbots and virtual assistants. It leverages Supermemory to provide relevant historical context in conversations while minimizing costs.
  * Why use: Utilize this setup when you want to enhance user interactions by incorporating past conversations and relevant information while keeping costs low.
  * Example use cases:
    * A budget-friendly customer support chatbot that remembers previous interactions with users to provide tailored assistance.
    * A lightweight virtual assistant that maintains context across multiple user requests for more coherent interactions.
    * An educational tutor bot that recalls past lessons and user progress to adapt its teaching approach on a budget.
    * A travel planning assistant that considers past queries about destinations and preferences to suggest personalized itineraries without incurring high costs.
    * A medical consultation tool that takes into account patient history and previous consultations to offer more informed advice while being cost-effective.
  * Configuration details: This setup uses the 'mastra' container tag for memory search, includes a conversation ID for grouping messages, and operates in 'full' mode to maximize context retrieval. It is configured to always add relevant memories to the prompts while utilizing the cost-effective Flash Lite model.
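The configuration details above can be sketched as a plain options object. The field names below (`containerTags`, `conversationId`, `mode`, `addMemories`) mirror the prose description but are illustrative assumptions, not the verified Supermemory API surface:

```typescript
// Hypothetical shape of the Supermemory options described above;
// field names are illustrative assumptions, not the exact Supermemory API.
type SupermemoryOptions = {
  containerTags: string[]; // scopes which memories are searched
  conversationId: string; // groups messages into one thread
  mode: "full" | "query"; // 'full' maximizes context retrieval
  addMemories: "always" | "never"; // always attach relevant memories
};

const googleWithMemoryLite: { model: string; memory: SupermemoryOptions } = {
  model: "gemini-flash-lite", // assumed Flash Lite model id, for illustration
  memory: {
    containerTags: ["mastra"], // the 'mastra' container tag from the description
    conversationId: "support-thread-123", // example id for grouping
    mode: "full",
    addMemories: "always",
  },
};

console.log(googleWithMemoryLite.memory.mode); // "full"
```

The point of the sketch is the division of labor: the model id selects the cheap Flash Lite backend, while the memory options control how much historical context Supermemory injects into each prompt.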
* superGoogle: Google Generative AI model integrated with Supermemory for enhanced conversational capabilities
  * When to use: This configuration is ideal for applications that require context-aware interactions, such as chatbots, virtual assistants, and customer support systems. It leverages Supermemory to provide relevant historical context in conversations.
  * Why use: Utilize this setup when you want to enhance user interactions by incorporating past conversations and relevant information, leading to more personalized and accurate responses.
  * Example use cases:
    * A customer support chatbot that remembers previous interactions with users to provide tailored assistance.
    * A virtual assistant that maintains context across multiple user requests for more coherent interactions.
    * An educational tutor bot that recalls past lessons and user progress to adapt its teaching approach.
    * A travel planning assistant that considers past queries about destinations and preferences to suggest personalized itineraries.
    * A medical consultation tool that takes into account patient history and previous consultations to offer more informed advice.
  * Configuration details: This setup uses the 'mastra' container tag for memory search, includes a conversation ID for grouping messages, and operates in 'full' mode to maximize context retrieval. It is configured to always add relevant memories to the prompts.
* superGoogleLite: Google Generative AI Flash Lite model integrated with Supermemory for enhanced conversational capabilities
  * When to use: This configuration is ideal for budget-conscious applications that still require context-aware interactions, such as lightweight chatbots and virtual assistants. It leverages Supermemory to provide relevant historical context in conversations while minimizing costs.
  * Why use: Utilize this setup when you want to enhance user interactions by incorporating past conversations and relevant information while keeping costs low.
  * Example use cases:
    * A budget-friendly customer support chatbot that remembers previous interactions with users to provide tailored assistance.
    * A lightweight virtual assistant that maintains context across multiple user requests for more coherent interactions.
    * An educational tutor bot that recalls past lessons and user progress to adapt its teaching approach on a budget.
    * A travel planning assistant that considers past queries about destinations and preferences to suggest personalized itineraries without incurring high costs.
    * A medical consultation tool that takes into account patient history and previous consultations to offer more informed advice while being cost-effective.
  * Configuration details: This setup uses the 'mastra' container tag for memory search, includes a conversation ID for grouping messages, and operates in 'full' mode to maximize context retrieval. It is configured to always add relevant memories to the prompts while utilizing the cost-effective Flash Lite model.
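Since superGoogle and superGoogleLite differ only in the underlying model's cost and capability, the choice between them can be expressed as a small routing helper. The config names match the entries above; the routing criterion is a hypothetical example:

```typescript
// Illustrative router between the two Supermemory-backed configs above.
// The budget threshold and its meaning are hypothetical examples.
function pickMemoryConfig(costSensitive: boolean): "superGoogle" | "superGoogleLite" {
  // Lite keeps per-request cost down; the full config favors response quality.
  return costSensitive ? "superGoogleLite" : "superGoogle";
}

console.log(pickMemoryConfig(true)); // "superGoogleLite"
console.log(pickMemoryConfig(false)); // "superGoogle"
```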